00:00:00.001 Started by upstream project "autotest-per-patch" build number 130568
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.024 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.025 The recommended git tool is: git
00:00:00.025 using credential 00000000-0000-0000-0000-000000000002
00:00:00.027 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.047 Fetching changes from the remote Git repository
00:00:00.050 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.071 Using shallow fetch with depth 1
00:00:00.071 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.071 > git --version # timeout=10
00:00:00.109 > git --version # 'git version 2.39.2'
00:00:00.110 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.151 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.151 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:02.859 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:02.872 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:02.886 Checking out Revision 53a1a621557260e3fbfd1fd32ee65ff11a804d5b (FETCH_HEAD)
00:00:02.886 > git config core.sparsecheckout # timeout=10
00:00:02.899 > git read-tree -mu HEAD # timeout=10
00:00:02.915 > git checkout -f 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=5
00:00:02.937 Commit message: "packer: Merge irdmafedora into main fedora image"
00:00:02.937 > git rev-list --no-walk 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=10
00:00:03.047 [Pipeline] Start of Pipeline
00:00:03.062 [Pipeline] library
00:00:03.064 Loading library shm_lib@master
00:00:03.064 Library shm_lib@master is cached. Copying from home.
00:00:03.080 [Pipeline] node
00:00:03.089 Running on VM-host-SM38 in /var/jenkins/workspace/raid-vg-autotest
00:00:03.091 [Pipeline] {
00:00:03.104 [Pipeline] catchError
00:00:03.107 [Pipeline] {
00:00:03.121 [Pipeline] wrap
00:00:03.132 [Pipeline] {
00:00:03.143 [Pipeline] stage
00:00:03.146 [Pipeline] { (Prologue)
00:00:03.167 [Pipeline] echo
00:00:03.169 Node: VM-host-SM38
00:00:03.175 [Pipeline] cleanWs
00:00:03.183 [WS-CLEANUP] Deleting project workspace...
00:00:03.183 [WS-CLEANUP] Deferred wipeout is used...
00:00:03.189 [WS-CLEANUP] done
00:00:03.381 [Pipeline] setCustomBuildProperty
00:00:03.465 [Pipeline] httpRequest
00:00:03.862 [Pipeline] echo
00:00:03.863 Sorcerer 10.211.164.101 is alive
00:00:03.870 [Pipeline] retry
00:00:03.871 [Pipeline] {
00:00:03.880 [Pipeline] httpRequest
00:00:03.883 HttpMethod: GET
00:00:03.884 URL: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:00:03.884 Sending request to url: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:00:03.885 Response Code: HTTP/1.1 200 OK
00:00:03.886 Success: Status code 200 is in the accepted range: 200,404
00:00:03.886 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:00:04.031 [Pipeline] }
00:00:04.048 [Pipeline] // retry
00:00:04.055 [Pipeline] sh
00:00:04.330 + tar --no-same-owner -xf jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:00:04.344 [Pipeline] httpRequest
00:00:04.680 [Pipeline] echo
00:00:04.682 Sorcerer 10.211.164.101 is alive
00:00:04.692 [Pipeline] retry
00:00:04.694 [Pipeline] {
00:00:04.708 [Pipeline] httpRequest
00:00:04.713 HttpMethod: GET
00:00:04.714 URL: http://10.211.164.101/packages/spdk_1c027d3563632a047e728d198e6a99b59e27c669.tar.gz
00:00:04.714 Sending request to url: http://10.211.164.101/packages/spdk_1c027d3563632a047e728d198e6a99b59e27c669.tar.gz
00:00:04.715 Response Code: HTTP/1.1 200 OK
00:00:04.715 Success: Status code 200 is in the accepted range: 200,404
00:00:04.716 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_1c027d3563632a047e728d198e6a99b59e27c669.tar.gz
00:00:21.285 [Pipeline] }
00:00:21.305 [Pipeline] // retry
00:00:21.313 [Pipeline] sh
00:00:21.591 + tar --no-same-owner -xf spdk_1c027d3563632a047e728d198e6a99b59e27c669.tar.gz
00:00:24.874 [Pipeline] sh
00:00:25.148 + git -C spdk log --oneline -n5
00:00:25.148 1c027d356 bdev_xnvme: add support for dataset management
00:00:25.148 447520417 xnvme: bump to 0.7.5
00:00:25.148 e9b861378 lib/iscsi: Fix: Unregister logout timer
00:00:25.148 081f43f2b lib/nvmf: Fix memory leak in nvmf_bdev_ctrlr_unmap
00:00:25.148 daeaec816 test/unit: remove unneeded MOCKs from ftl unit tests
00:00:25.163 [Pipeline] writeFile
00:00:25.173 [Pipeline] sh
00:00:25.477 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:25.514 [Pipeline] sh
00:00:25.789 + cat autorun-spdk.conf
00:00:25.790 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:25.790 SPDK_RUN_ASAN=1
00:00:25.790 SPDK_RUN_UBSAN=1
00:00:25.790 SPDK_TEST_RAID=1
00:00:25.790 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:25.795 RUN_NIGHTLY=0
00:00:25.796 [Pipeline] }
00:00:25.805 [Pipeline] // stage
00:00:25.815 [Pipeline] stage
00:00:25.817 [Pipeline] { (Run VM)
00:00:25.826 [Pipeline] sh
00:00:26.111 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:26.111 + echo 'Start stage prepare_nvme.sh'
00:00:26.111 Start stage prepare_nvme.sh
00:00:26.111 + [[ -n 10 ]]
00:00:26.111 + disk_prefix=ex10
00:00:26.111 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:00:26.111 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:00:26.111 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:00:26.111 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:26.111 ++ SPDK_RUN_ASAN=1
00:00:26.111 ++ SPDK_RUN_UBSAN=1
00:00:26.111 ++ SPDK_TEST_RAID=1
00:00:26.111 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:26.111 ++ RUN_NIGHTLY=0
00:00:26.111 + cd /var/jenkins/workspace/raid-vg-autotest
00:00:26.111 + nvme_files=()
00:00:26.111 + declare -A nvme_files
00:00:26.111 + backend_dir=/var/lib/libvirt/images/backends
00:00:26.111 + nvme_files['nvme.img']=5G
00:00:26.111 + nvme_files['nvme-cmb.img']=5G
00:00:26.111 + nvme_files['nvme-multi0.img']=4G
00:00:26.111 + nvme_files['nvme-multi1.img']=4G
00:00:26.111 + nvme_files['nvme-multi2.img']=4G
00:00:26.111 + nvme_files['nvme-openstack.img']=8G
00:00:26.111 + nvme_files['nvme-zns.img']=5G
00:00:26.111 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:26.111 + (( SPDK_TEST_FTL == 1 ))
00:00:26.111 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:26.111 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:26.111 + for nvme in "${!nvme_files[@]}"
00:00:26.111 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi2.img -s 4G
00:00:26.111 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:26.111 + for nvme in "${!nvme_files[@]}"
00:00:26.111 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-cmb.img -s 5G
00:00:26.111 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:26.111 + for nvme in "${!nvme_files[@]}"
00:00:26.111 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-openstack.img -s 8G
00:00:26.111 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:26.111 + for nvme in "${!nvme_files[@]}"
00:00:26.111 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-zns.img -s 5G
00:00:26.111 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:26.111 + for nvme in "${!nvme_files[@]}"
00:00:26.111 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi1.img -s 4G
00:00:26.111 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:26.111 + for nvme in "${!nvme_files[@]}"
00:00:26.111 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi0.img -s 4G
00:00:26.111 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:26.111 + for nvme in "${!nvme_files[@]}"
00:00:26.111 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme.img -s 5G
00:00:26.370 Formatting '/var/lib/libvirt/images/backends/ex10-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:26.370 ++ sudo grep -rl ex10-nvme.img /etc/libvirt/qemu
00:00:26.370 + echo 'End stage prepare_nvme.sh'
00:00:26.370 End stage prepare_nvme.sh
00:00:26.379 [Pipeline] sh
00:00:26.656 + DISTRO=fedora39
00:00:26.656 + CPUS=10
00:00:26.656 + RAM=12288
00:00:26.656 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:26.656 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex10-nvme.img -b /var/lib/libvirt/images/backends/ex10-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img -H -a -v -f fedora39
00:00:26.656
00:00:26.656 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:00:26.656 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:00:26.656 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:00:26.656 HELP=0
00:00:26.656 DRY_RUN=0
00:00:26.656 NVME_FILE=/var/lib/libvirt/images/backends/ex10-nvme.img,/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,
00:00:26.656 NVME_DISKS_TYPE=nvme,nvme,
00:00:26.656 NVME_AUTO_CREATE=0
00:00:26.656 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,
00:00:26.656 NVME_CMB=,,
00:00:26.656 NVME_PMR=,,
00:00:26.656 NVME_ZNS=,,
00:00:26.656 NVME_MS=,,
00:00:26.656 NVME_FDP=,,
00:00:26.656 SPDK_VAGRANT_DISTRO=fedora39
00:00:26.656 SPDK_VAGRANT_VMCPU=10
00:00:26.656 SPDK_VAGRANT_VMRAM=12288
00:00:26.656 SPDK_VAGRANT_PROVIDER=libvirt
00:00:26.656 SPDK_VAGRANT_HTTP_PROXY=
00:00:26.656 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:26.656 SPDK_OPENSTACK_NETWORK=0
00:00:26.656 VAGRANT_PACKAGE_BOX=0
00:00:26.656 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:00:26.656 FORCE_DISTRO=true
00:00:26.656 VAGRANT_BOX_VERSION=
00:00:26.656 EXTRA_VAGRANTFILES=
00:00:26.656 NIC_MODEL=e1000
00:00:26.656
00:00:26.656 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:00:26.656 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:00:28.559 Bringing machine 'default' up with 'libvirt' provider...
00:00:29.123 ==> default: Creating image (snapshot of base box volume).
00:00:29.123 ==> default: Creating domain with the following settings...
00:00:29.123 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1727792720_84eea110b16ef46ec887
00:00:29.123 ==> default: -- Domain type: kvm
00:00:29.123 ==> default: -- Cpus: 10
00:00:29.123 ==> default: -- Feature: acpi
00:00:29.123 ==> default: -- Feature: apic
00:00:29.123 ==> default: -- Feature: pae
00:00:29.123 ==> default: -- Memory: 12288M
00:00:29.123 ==> default: -- Memory Backing: hugepages:
00:00:29.123 ==> default: -- Management MAC:
00:00:29.123 ==> default: -- Loader:
00:00:29.123 ==> default: -- Nvram:
00:00:29.123 ==> default: -- Base box: spdk/fedora39
00:00:29.123 ==> default: -- Storage pool: default
00:00:29.123 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1727792720_84eea110b16ef46ec887.img (20G)
00:00:29.123 ==> default: -- Volume Cache: default
00:00:29.123 ==> default: -- Kernel:
00:00:29.123 ==> default: -- Initrd:
00:00:29.123 ==> default: -- Graphics Type: vnc
00:00:29.123 ==> default: -- Graphics Port: -1
00:00:29.124 ==> default: -- Graphics IP: 127.0.0.1
00:00:29.124 ==> default: -- Graphics Password: Not defined
00:00:29.124 ==> default: -- Video Type: cirrus
00:00:29.124 ==> default: -- Video VRAM: 9216
00:00:29.124 ==> default: -- Sound Type:
00:00:29.124 ==> default: -- Keymap: en-us
00:00:29.124 ==> default: -- TPM Path:
00:00:29.124 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:29.124 ==> default: -- Command line args:
00:00:29.124 ==> default: -> value=-device,
00:00:29.124 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:29.124 ==> default: -> value=-drive,
00:00:29.124 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme.img,if=none,id=nvme-0-drive0,
00:00:29.124 ==> default: -> value=-device,
00:00:29.124 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:29.124 ==> default: -> value=-device,
00:00:29.124 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:29.124 ==> default: -> value=-drive,
00:00:29.124 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:00:29.124 ==> default: -> value=-device,
00:00:29.124 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:29.124 ==> default: -> value=-drive,
00:00:29.124 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:00:29.124 ==> default: -> value=-device,
00:00:29.124 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:29.124 ==> default: -> value=-drive,
00:00:29.124 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:00:29.124 ==> default: -> value=-device,
00:00:29.124 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:29.384 ==> default: Creating shared folders metadata...
00:00:29.384 ==> default: Starting domain.
00:00:30.814 ==> default: Waiting for domain to get an IP address...
00:00:48.919 ==> default: Waiting for SSH to become available...
00:00:48.919 ==> default: Configuring and enabling network interfaces...
00:00:52.200 default: SSH address: 192.168.121.4:22
00:00:52.200 default: SSH username: vagrant
00:00:52.200 default: SSH auth method: private key
00:00:54.116 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:00.710 ==> default: Mounting SSHFS shared folder...
00:01:01.643 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:01.643 ==> default: Checking Mount..
00:01:03.014 ==> default: Folder Successfully Mounted!
00:01:03.014
00:01:03.014 SUCCESS!
00:01:03.014
00:01:03.014 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:01:03.014 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:03.014 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:01:03.014
00:01:03.023 [Pipeline] }
00:01:03.038 [Pipeline] // stage
00:01:03.047 [Pipeline] dir
00:01:03.048 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:01:03.050 [Pipeline] {
00:01:03.065 [Pipeline] catchError
00:01:03.067 [Pipeline] {
00:01:03.081 [Pipeline] sh
00:01:03.358 + vagrant ssh-config --host vagrant
00:01:03.358 + sed -ne '/^Host/,$p'
00:01:03.358 + tee ssh_conf
00:01:05.888 Host vagrant
00:01:05.888 HostName 192.168.121.4
00:01:05.888 User vagrant
00:01:05.888 Port 22
00:01:05.888 UserKnownHostsFile /dev/null
00:01:05.888 StrictHostKeyChecking no
00:01:05.888 PasswordAuthentication no
00:01:05.888 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:05.888 IdentitiesOnly yes
00:01:05.888 LogLevel FATAL
00:01:05.888 ForwardAgent yes
00:01:05.888 ForwardX11 yes
00:01:05.888
00:01:05.902 [Pipeline] withEnv
00:01:05.904 [Pipeline] {
00:01:05.916 [Pipeline] sh
00:01:06.199 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:01:06.199 source /etc/os-release
00:01:06.199 [[ -e /image.version ]] && img=$(< /image.version)
00:01:06.199 # Minimal, systemd-like check.
00:01:06.199 if [[ -e /.dockerenv ]]; then
00:01:06.199 # Clear garbage from the node'\''s name:
00:01:06.199 # agt-er_autotest_547-896 -> autotest_547-896
00:01:06.199 # $HOSTNAME is the actual container id
00:01:06.199 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:06.199 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:06.199 # We can assume this is a mount from a host where container is running,
00:01:06.199 # so fetch its hostname to easily identify the target swarm worker.
00:01:06.199 container="$(< /etc/hostname) ($agent)"
00:01:06.199 else
00:01:06.199 # Fallback
00:01:06.199 container=$agent
00:01:06.199 fi
00:01:06.199 fi
00:01:06.199 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:06.199 '
00:01:06.468 [Pipeline] }
00:01:06.484 [Pipeline] // withEnv
00:01:06.492 [Pipeline] setCustomBuildProperty
00:01:06.507 [Pipeline] stage
00:01:06.510 [Pipeline] { (Tests)
00:01:06.529 [Pipeline] sh
00:01:06.805 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:06.817 [Pipeline] sh
00:01:07.095 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:07.111 [Pipeline] timeout
00:01:07.111 Timeout set to expire in 1 hr 30 min
00:01:07.114 [Pipeline] {
00:01:07.130 [Pipeline] sh
00:01:07.439 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:01:07.697 HEAD is now at 1c027d356 bdev_xnvme: add support for dataset management
00:01:07.711 [Pipeline] sh
00:01:07.988 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:01:08.000 [Pipeline] sh
00:01:08.275 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:08.290 [Pipeline] sh
00:01:08.569 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo'
00:01:08.569 ++ readlink -f spdk_repo
00:01:08.569 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:08.569 + [[ -n /home/vagrant/spdk_repo ]]
00:01:08.569 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:08.569 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:08.569 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:08.569 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:08.569 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:08.569 + [[ raid-vg-autotest == pkgdep-* ]]
00:01:08.569 + cd /home/vagrant/spdk_repo
00:01:08.569 + source /etc/os-release
00:01:08.569 ++ NAME='Fedora Linux'
00:01:08.569 ++ VERSION='39 (Cloud Edition)'
00:01:08.569 ++ ID=fedora
00:01:08.570 ++ VERSION_ID=39
00:01:08.570 ++ VERSION_CODENAME=
00:01:08.570 ++ PLATFORM_ID=platform:f39
00:01:08.570 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:08.570 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:08.570 ++ LOGO=fedora-logo-icon
00:01:08.570 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:08.570 ++ HOME_URL=https://fedoraproject.org/
00:01:08.570 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:08.570 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:08.570 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:08.570 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:08.570 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:08.570 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:08.570 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:08.570 ++ SUPPORT_END=2024-11-12
00:01:08.570 ++ VARIANT='Cloud Edition'
00:01:08.570 ++ VARIANT_ID=cloud
00:01:08.570 + uname -a
00:01:08.570 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:08.570 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:09.133 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:09.133 Hugepages
00:01:09.133 node hugesize free / total
00:01:09.133 node0 1048576kB 0 / 0
00:01:09.133 node0 2048kB 0 / 0
00:01:09.133
00:01:09.133 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:09.133 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:09.133 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:09.133 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:01:09.133 + rm -f /tmp/spdk-ld-path
00:01:09.133 + source autorun-spdk.conf
00:01:09.133 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:09.133 ++ SPDK_RUN_ASAN=1
00:01:09.133 ++ SPDK_RUN_UBSAN=1
00:01:09.133 ++ SPDK_TEST_RAID=1
00:01:09.133 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:09.133 ++ RUN_NIGHTLY=0
00:01:09.133 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:09.133 + [[ -n '' ]]
00:01:09.133 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:09.133 + for M in /var/spdk/build-*-manifest.txt
00:01:09.133 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:09.133 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:09.133 + for M in /var/spdk/build-*-manifest.txt
00:01:09.133 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:09.133 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:09.133 + for M in /var/spdk/build-*-manifest.txt
00:01:09.133 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:09.133 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:09.133 ++ uname
00:01:09.133 + [[ Linux == \L\i\n\u\x ]]
00:01:09.133 + sudo dmesg -T
00:01:09.133 + sudo dmesg --clear
00:01:09.133 + dmesg_pid=4987
00:01:09.133 + sudo dmesg -Tw
00:01:09.133 + [[ Fedora Linux == FreeBSD ]]
00:01:09.133 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:09.133 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:09.133 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:09.133 + [[ -x /usr/src/fio-static/fio ]]
00:01:09.133 + export FIO_BIN=/usr/src/fio-static/fio
00:01:09.133 + FIO_BIN=/usr/src/fio-static/fio
00:01:09.133 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:09.133 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:09.133 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:09.133 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:09.133 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:09.133 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:09.133 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:09.133 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:09.133 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:09.390 Test configuration:
00:01:09.390 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:09.390 SPDK_RUN_ASAN=1
00:01:09.390 SPDK_RUN_UBSAN=1
00:01:09.390 SPDK_TEST_RAID=1
00:01:09.390 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:09.390 RUN_NIGHTLY=0
14:26:00 -- common/autotest_common.sh@1680 -- $ [[ n == y ]]
00:01:09.390 14:26:00 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:09.390 14:26:00 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:09.390 14:26:00 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:09.390 14:26:00 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:09.390 14:26:00 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:09.390 14:26:00 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:09.390 14:26:00 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:09.390 14:26:00 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:09.390 14:26:00 -- paths/export.sh@5 -- $ export PATH
00:01:09.390 14:26:00 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:09.390 14:26:00 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:09.390 14:26:00 -- common/autobuild_common.sh@479 -- $ date +%s
00:01:09.648 14:26:01 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727792761.XXXXXX
00:01:09.648 14:26:01 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727792761.WPNNOP
00:01:09.648 14:26:01 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:01:09.648 14:26:01 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']'
00:01:09.648 14:26:01 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:09.648 14:26:01 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:09.648 14:26:01 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:09.648 14:26:01 -- common/autobuild_common.sh@495 -- $ get_config_params
00:01:09.648 14:26:01 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:01:09.648 14:26:01 -- common/autotest_common.sh@10 -- $ set +x
00:01:09.649 14:26:01 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:01:09.649 14:26:01 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:01:09.649 14:26:01 -- pm/common@17 -- $ local monitor
00:01:09.649 14:26:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:09.649 14:26:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:09.649 14:26:01 -- pm/common@25 -- $ sleep 1
00:01:09.649 14:26:01 -- pm/common@21 -- $ date +%s
00:01:09.649 14:26:01 -- pm/common@21 -- $ date +%s
00:01:09.649 14:26:01 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727792761
00:01:09.649 14:26:01 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727792761
00:01:09.649 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727792761_collect-cpu-load.pm.log
00:01:09.649 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727792761_collect-vmstat.pm.log
00:01:10.581 14:26:02 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:01:10.581 14:26:02 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:10.581 14:26:02 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:10.581 14:26:02 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:10.581 14:26:02 -- spdk/autobuild.sh@16 -- $ date -u
00:01:10.581 Tue Oct 1 02:26:02 PM UTC 2024
00:01:10.581 14:26:02 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:10.581 v25.01-pre-25-g1c027d356
00:01:10.581 14:26:02 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:10.581 14:26:02 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:10.581 14:26:02 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:10.581 14:26:02 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:10.581 14:26:02 -- common/autotest_common.sh@10 -- $ set +x
00:01:10.581 ************************************
00:01:10.581 START TEST asan
00:01:10.581 ************************************
00:01:10.581 using asan
14:26:02 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan'
00:01:10.581
00:01:10.581 real 0m0.000s
00:01:10.581 user 0m0.000s
00:01:10.581 sys 0m0.000s
14:26:02 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:10.581 ************************************
00:01:10.581 END TEST asan
14:26:02 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:10.581 ************************************
14:26:02 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
14:26:02 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
14:26:02 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
14:26:02 -- common/autotest_common.sh@1107 -- $ xtrace_disable
14:26:02 -- common/autotest_common.sh@10 -- $ set +x
00:01:10.581 ************************************
00:01:10.581 START TEST ubsan
00:01:10.581 ************************************
00:01:10.581 using ubsan
14:26:02 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:01:10.581
00:01:10.581 real 0m0.000s
00:01:10.581 user 0m0.000s
00:01:10.581 sys 0m0.000s
14:26:02 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:10.581 ************************************
14:26:02 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:10.581 END TEST ubsan
00:01:10.581 ************************************
14:26:02 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
14:26:02 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
14:26:02 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
14:26:02 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
14:26:02 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
14:26:02 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
14:26:02 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
14:26:02 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
14:26:02 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:01:10.839 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:10.839 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:11.097 Using 'verbs' RDMA provider
00:01:22.013 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:01:34.379 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:01:34.379 Creating mk/config.mk...done.
00:01:34.379 Creating mk/cc.flags.mk...done.
00:01:34.379 Type 'make' to build.
00:01:34.379 14:26:24 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:01:34.379 14:26:24 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:34.379 14:26:24 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:34.379 14:26:24 -- common/autotest_common.sh@10 -- $ set +x 00:01:34.379 ************************************ 00:01:34.379 START TEST make 00:01:34.379 ************************************ 00:01:34.379 14:26:24 make -- common/autotest_common.sh@1125 -- $ make -j10 00:01:34.379 make[1]: Nothing to be done for 'all'. 00:01:44.379 The Meson build system 00:01:44.379 Version: 1.5.0 00:01:44.379 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:01:44.379 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:01:44.379 Build type: native build 00:01:44.379 Program cat found: YES (/usr/bin/cat) 00:01:44.379 Project name: DPDK 00:01:44.379 Project version: 24.03.0 00:01:44.379 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:44.379 C linker for the host machine: cc ld.bfd 2.40-14 00:01:44.379 Host machine cpu family: x86_64 00:01:44.379 Host machine cpu: x86_64 00:01:44.379 Message: ## Building in Developer Mode ## 00:01:44.379 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:44.379 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:01:44.379 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:44.379 Program python3 found: YES (/usr/bin/python3) 00:01:44.379 Program cat found: YES (/usr/bin/cat) 00:01:44.379 Compiler for C supports arguments -march=native: YES 00:01:44.379 Checking for size of "void *" : 8 00:01:44.379 Checking for size of "void *" : 8 (cached) 00:01:44.379 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:44.379 Library m found: YES 00:01:44.379 Library numa found: YES 00:01:44.379 Has header "numaif.h" : YES 
00:01:44.379 Library fdt found: NO 00:01:44.379 Library execinfo found: NO 00:01:44.379 Has header "execinfo.h" : YES 00:01:44.379 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:44.379 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:44.379 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:44.379 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:44.379 Run-time dependency openssl found: YES 3.1.1 00:01:44.379 Run-time dependency libpcap found: YES 1.10.4 00:01:44.379 Has header "pcap.h" with dependency libpcap: YES 00:01:44.379 Compiler for C supports arguments -Wcast-qual: YES 00:01:44.379 Compiler for C supports arguments -Wdeprecated: YES 00:01:44.379 Compiler for C supports arguments -Wformat: YES 00:01:44.379 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:44.379 Compiler for C supports arguments -Wformat-security: NO 00:01:44.379 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:44.379 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:44.379 Compiler for C supports arguments -Wnested-externs: YES 00:01:44.379 Compiler for C supports arguments -Wold-style-definition: YES 00:01:44.379 Compiler for C supports arguments -Wpointer-arith: YES 00:01:44.379 Compiler for C supports arguments -Wsign-compare: YES 00:01:44.379 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:44.379 Compiler for C supports arguments -Wundef: YES 00:01:44.379 Compiler for C supports arguments -Wwrite-strings: YES 00:01:44.379 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:44.379 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:44.379 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:44.379 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:44.379 Program objdump found: YES (/usr/bin/objdump) 00:01:44.379 Compiler for C supports arguments -mavx512f: YES 00:01:44.379 Checking if "AVX512 
checking" compiles: YES 00:01:44.379 Fetching value of define "__SSE4_2__" : 1 00:01:44.379 Fetching value of define "__AES__" : 1 00:01:44.379 Fetching value of define "__AVX__" : 1 00:01:44.379 Fetching value of define "__AVX2__" : 1 00:01:44.379 Fetching value of define "__AVX512BW__" : 1 00:01:44.379 Fetching value of define "__AVX512CD__" : 1 00:01:44.379 Fetching value of define "__AVX512DQ__" : 1 00:01:44.379 Fetching value of define "__AVX512F__" : 1 00:01:44.379 Fetching value of define "__AVX512VL__" : 1 00:01:44.379 Fetching value of define "__PCLMUL__" : 1 00:01:44.379 Fetching value of define "__RDRND__" : 1 00:01:44.379 Fetching value of define "__RDSEED__" : 1 00:01:44.379 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:44.379 Fetching value of define "__znver1__" : (undefined) 00:01:44.379 Fetching value of define "__znver2__" : (undefined) 00:01:44.379 Fetching value of define "__znver3__" : (undefined) 00:01:44.379 Fetching value of define "__znver4__" : (undefined) 00:01:44.379 Library asan found: YES 00:01:44.379 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:44.379 Message: lib/log: Defining dependency "log" 00:01:44.379 Message: lib/kvargs: Defining dependency "kvargs" 00:01:44.379 Message: lib/telemetry: Defining dependency "telemetry" 00:01:44.379 Library rt found: YES 00:01:44.379 Checking for function "getentropy" : NO 00:01:44.379 Message: lib/eal: Defining dependency "eal" 00:01:44.379 Message: lib/ring: Defining dependency "ring" 00:01:44.379 Message: lib/rcu: Defining dependency "rcu" 00:01:44.379 Message: lib/mempool: Defining dependency "mempool" 00:01:44.379 Message: lib/mbuf: Defining dependency "mbuf" 00:01:44.379 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:44.379 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:44.379 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:44.379 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:44.379 Fetching value of define 
"__AVX512VL__" : 1 (cached) 00:01:44.379 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:44.379 Compiler for C supports arguments -mpclmul: YES 00:01:44.379 Compiler for C supports arguments -maes: YES 00:01:44.379 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:44.379 Compiler for C supports arguments -mavx512bw: YES 00:01:44.379 Compiler for C supports arguments -mavx512dq: YES 00:01:44.379 Compiler for C supports arguments -mavx512vl: YES 00:01:44.379 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:44.379 Compiler for C supports arguments -mavx2: YES 00:01:44.379 Compiler for C supports arguments -mavx: YES 00:01:44.379 Message: lib/net: Defining dependency "net" 00:01:44.379 Message: lib/meter: Defining dependency "meter" 00:01:44.379 Message: lib/ethdev: Defining dependency "ethdev" 00:01:44.379 Message: lib/pci: Defining dependency "pci" 00:01:44.379 Message: lib/cmdline: Defining dependency "cmdline" 00:01:44.379 Message: lib/hash: Defining dependency "hash" 00:01:44.379 Message: lib/timer: Defining dependency "timer" 00:01:44.379 Message: lib/compressdev: Defining dependency "compressdev" 00:01:44.379 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:44.379 Message: lib/dmadev: Defining dependency "dmadev" 00:01:44.379 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:44.380 Message: lib/power: Defining dependency "power" 00:01:44.380 Message: lib/reorder: Defining dependency "reorder" 00:01:44.380 Message: lib/security: Defining dependency "security" 00:01:44.380 Has header "linux/userfaultfd.h" : YES 00:01:44.380 Has header "linux/vduse.h" : YES 00:01:44.380 Message: lib/vhost: Defining dependency "vhost" 00:01:44.380 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:44.380 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:44.380 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:44.380 Message: drivers/mempool/ring: Defining dependency 
"mempool_ring" 00:01:44.380 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:44.380 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:44.380 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:44.380 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:44.380 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:44.380 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:44.380 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:44.380 Configuring doxy-api-html.conf using configuration 00:01:44.380 Configuring doxy-api-man.conf using configuration 00:01:44.380 Program mandb found: YES (/usr/bin/mandb) 00:01:44.380 Program sphinx-build found: NO 00:01:44.380 Configuring rte_build_config.h using configuration 00:01:44.380 Message: 00:01:44.380 ================= 00:01:44.380 Applications Enabled 00:01:44.380 ================= 00:01:44.380 00:01:44.380 apps: 00:01:44.380 00:01:44.380 00:01:44.380 Message: 00:01:44.380 ================= 00:01:44.380 Libraries Enabled 00:01:44.380 ================= 00:01:44.380 00:01:44.380 libs: 00:01:44.380 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:44.380 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:44.380 cryptodev, dmadev, power, reorder, security, vhost, 00:01:44.380 00:01:44.380 Message: 00:01:44.380 =============== 00:01:44.380 Drivers Enabled 00:01:44.380 =============== 00:01:44.380 00:01:44.380 common: 00:01:44.380 00:01:44.380 bus: 00:01:44.380 pci, vdev, 00:01:44.380 mempool: 00:01:44.380 ring, 00:01:44.380 dma: 00:01:44.380 00:01:44.380 net: 00:01:44.380 00:01:44.380 crypto: 00:01:44.380 00:01:44.380 compress: 00:01:44.380 00:01:44.380 vdpa: 00:01:44.380 00:01:44.380 00:01:44.380 Message: 00:01:44.380 ================= 00:01:44.380 Content Skipped 00:01:44.380 ================= 00:01:44.380 00:01:44.380 apps: 00:01:44.380 
dumpcap: explicitly disabled via build config 00:01:44.380 graph: explicitly disabled via build config 00:01:44.380 pdump: explicitly disabled via build config 00:01:44.380 proc-info: explicitly disabled via build config 00:01:44.380 test-acl: explicitly disabled via build config 00:01:44.380 test-bbdev: explicitly disabled via build config 00:01:44.380 test-cmdline: explicitly disabled via build config 00:01:44.380 test-compress-perf: explicitly disabled via build config 00:01:44.380 test-crypto-perf: explicitly disabled via build config 00:01:44.380 test-dma-perf: explicitly disabled via build config 00:01:44.380 test-eventdev: explicitly disabled via build config 00:01:44.380 test-fib: explicitly disabled via build config 00:01:44.380 test-flow-perf: explicitly disabled via build config 00:01:44.380 test-gpudev: explicitly disabled via build config 00:01:44.380 test-mldev: explicitly disabled via build config 00:01:44.380 test-pipeline: explicitly disabled via build config 00:01:44.380 test-pmd: explicitly disabled via build config 00:01:44.380 test-regex: explicitly disabled via build config 00:01:44.380 test-sad: explicitly disabled via build config 00:01:44.380 test-security-perf: explicitly disabled via build config 00:01:44.380 00:01:44.380 libs: 00:01:44.380 argparse: explicitly disabled via build config 00:01:44.380 metrics: explicitly disabled via build config 00:01:44.380 acl: explicitly disabled via build config 00:01:44.380 bbdev: explicitly disabled via build config 00:01:44.380 bitratestats: explicitly disabled via build config 00:01:44.380 bpf: explicitly disabled via build config 00:01:44.380 cfgfile: explicitly disabled via build config 00:01:44.380 distributor: explicitly disabled via build config 00:01:44.380 efd: explicitly disabled via build config 00:01:44.380 eventdev: explicitly disabled via build config 00:01:44.380 dispatcher: explicitly disabled via build config 00:01:44.380 gpudev: explicitly disabled via build config 00:01:44.380 gro: 
explicitly disabled via build config 00:01:44.380 gso: explicitly disabled via build config 00:01:44.380 ip_frag: explicitly disabled via build config 00:01:44.380 jobstats: explicitly disabled via build config 00:01:44.380 latencystats: explicitly disabled via build config 00:01:44.380 lpm: explicitly disabled via build config 00:01:44.380 member: explicitly disabled via build config 00:01:44.380 pcapng: explicitly disabled via build config 00:01:44.380 rawdev: explicitly disabled via build config 00:01:44.380 regexdev: explicitly disabled via build config 00:01:44.380 mldev: explicitly disabled via build config 00:01:44.380 rib: explicitly disabled via build config 00:01:44.380 sched: explicitly disabled via build config 00:01:44.380 stack: explicitly disabled via build config 00:01:44.380 ipsec: explicitly disabled via build config 00:01:44.380 pdcp: explicitly disabled via build config 00:01:44.380 fib: explicitly disabled via build config 00:01:44.380 port: explicitly disabled via build config 00:01:44.380 pdump: explicitly disabled via build config 00:01:44.380 table: explicitly disabled via build config 00:01:44.380 pipeline: explicitly disabled via build config 00:01:44.380 graph: explicitly disabled via build config 00:01:44.380 node: explicitly disabled via build config 00:01:44.380 00:01:44.380 drivers: 00:01:44.380 common/cpt: not in enabled drivers build config 00:01:44.380 common/dpaax: not in enabled drivers build config 00:01:44.380 common/iavf: not in enabled drivers build config 00:01:44.380 common/idpf: not in enabled drivers build config 00:01:44.380 common/ionic: not in enabled drivers build config 00:01:44.380 common/mvep: not in enabled drivers build config 00:01:44.380 common/octeontx: not in enabled drivers build config 00:01:44.380 bus/auxiliary: not in enabled drivers build config 00:01:44.380 bus/cdx: not in enabled drivers build config 00:01:44.380 bus/dpaa: not in enabled drivers build config 00:01:44.380 bus/fslmc: not in enabled 
drivers build config 00:01:44.380 bus/ifpga: not in enabled drivers build config 00:01:44.380 bus/platform: not in enabled drivers build config 00:01:44.380 bus/uacce: not in enabled drivers build config 00:01:44.380 bus/vmbus: not in enabled drivers build config 00:01:44.380 common/cnxk: not in enabled drivers build config 00:01:44.380 common/mlx5: not in enabled drivers build config 00:01:44.380 common/nfp: not in enabled drivers build config 00:01:44.380 common/nitrox: not in enabled drivers build config 00:01:44.380 common/qat: not in enabled drivers build config 00:01:44.380 common/sfc_efx: not in enabled drivers build config 00:01:44.380 mempool/bucket: not in enabled drivers build config 00:01:44.380 mempool/cnxk: not in enabled drivers build config 00:01:44.380 mempool/dpaa: not in enabled drivers build config 00:01:44.380 mempool/dpaa2: not in enabled drivers build config 00:01:44.380 mempool/octeontx: not in enabled drivers build config 00:01:44.380 mempool/stack: not in enabled drivers build config 00:01:44.380 dma/cnxk: not in enabled drivers build config 00:01:44.380 dma/dpaa: not in enabled drivers build config 00:01:44.380 dma/dpaa2: not in enabled drivers build config 00:01:44.380 dma/hisilicon: not in enabled drivers build config 00:01:44.380 dma/idxd: not in enabled drivers build config 00:01:44.380 dma/ioat: not in enabled drivers build config 00:01:44.380 dma/skeleton: not in enabled drivers build config 00:01:44.380 net/af_packet: not in enabled drivers build config 00:01:44.380 net/af_xdp: not in enabled drivers build config 00:01:44.380 net/ark: not in enabled drivers build config 00:01:44.380 net/atlantic: not in enabled drivers build config 00:01:44.380 net/avp: not in enabled drivers build config 00:01:44.380 net/axgbe: not in enabled drivers build config 00:01:44.380 net/bnx2x: not in enabled drivers build config 00:01:44.380 net/bnxt: not in enabled drivers build config 00:01:44.380 net/bonding: not in enabled drivers build config 
00:01:44.380 net/cnxk: not in enabled drivers build config 00:01:44.380 net/cpfl: not in enabled drivers build config 00:01:44.380 net/cxgbe: not in enabled drivers build config 00:01:44.380 net/dpaa: not in enabled drivers build config 00:01:44.380 net/dpaa2: not in enabled drivers build config 00:01:44.380 net/e1000: not in enabled drivers build config 00:01:44.380 net/ena: not in enabled drivers build config 00:01:44.380 net/enetc: not in enabled drivers build config 00:01:44.380 net/enetfec: not in enabled drivers build config 00:01:44.380 net/enic: not in enabled drivers build config 00:01:44.380 net/failsafe: not in enabled drivers build config 00:01:44.380 net/fm10k: not in enabled drivers build config 00:01:44.380 net/gve: not in enabled drivers build config 00:01:44.380 net/hinic: not in enabled drivers build config 00:01:44.380 net/hns3: not in enabled drivers build config 00:01:44.380 net/i40e: not in enabled drivers build config 00:01:44.380 net/iavf: not in enabled drivers build config 00:01:44.380 net/ice: not in enabled drivers build config 00:01:44.380 net/idpf: not in enabled drivers build config 00:01:44.380 net/igc: not in enabled drivers build config 00:01:44.380 net/ionic: not in enabled drivers build config 00:01:44.380 net/ipn3ke: not in enabled drivers build config 00:01:44.380 net/ixgbe: not in enabled drivers build config 00:01:44.380 net/mana: not in enabled drivers build config 00:01:44.380 net/memif: not in enabled drivers build config 00:01:44.380 net/mlx4: not in enabled drivers build config 00:01:44.380 net/mlx5: not in enabled drivers build config 00:01:44.380 net/mvneta: not in enabled drivers build config 00:01:44.380 net/mvpp2: not in enabled drivers build config 00:01:44.380 net/netvsc: not in enabled drivers build config 00:01:44.380 net/nfb: not in enabled drivers build config 00:01:44.380 net/nfp: not in enabled drivers build config 00:01:44.380 net/ngbe: not in enabled drivers build config 00:01:44.380 net/null: not in 
enabled drivers build config 00:01:44.380 net/octeontx: not in enabled drivers build config 00:01:44.380 net/octeon_ep: not in enabled drivers build config 00:01:44.381 net/pcap: not in enabled drivers build config 00:01:44.381 net/pfe: not in enabled drivers build config 00:01:44.381 net/qede: not in enabled drivers build config 00:01:44.381 net/ring: not in enabled drivers build config 00:01:44.381 net/sfc: not in enabled drivers build config 00:01:44.381 net/softnic: not in enabled drivers build config 00:01:44.381 net/tap: not in enabled drivers build config 00:01:44.381 net/thunderx: not in enabled drivers build config 00:01:44.381 net/txgbe: not in enabled drivers build config 00:01:44.381 net/vdev_netvsc: not in enabled drivers build config 00:01:44.381 net/vhost: not in enabled drivers build config 00:01:44.381 net/virtio: not in enabled drivers build config 00:01:44.381 net/vmxnet3: not in enabled drivers build config 00:01:44.381 raw/*: missing internal dependency, "rawdev" 00:01:44.381 crypto/armv8: not in enabled drivers build config 00:01:44.381 crypto/bcmfs: not in enabled drivers build config 00:01:44.381 crypto/caam_jr: not in enabled drivers build config 00:01:44.381 crypto/ccp: not in enabled drivers build config 00:01:44.381 crypto/cnxk: not in enabled drivers build config 00:01:44.381 crypto/dpaa_sec: not in enabled drivers build config 00:01:44.381 crypto/dpaa2_sec: not in enabled drivers build config 00:01:44.381 crypto/ipsec_mb: not in enabled drivers build config 00:01:44.381 crypto/mlx5: not in enabled drivers build config 00:01:44.381 crypto/mvsam: not in enabled drivers build config 00:01:44.381 crypto/nitrox: not in enabled drivers build config 00:01:44.381 crypto/null: not in enabled drivers build config 00:01:44.381 crypto/octeontx: not in enabled drivers build config 00:01:44.381 crypto/openssl: not in enabled drivers build config 00:01:44.381 crypto/scheduler: not in enabled drivers build config 00:01:44.381 crypto/uadk: not in 
enabled drivers build config 00:01:44.381 crypto/virtio: not in enabled drivers build config 00:01:44.381 compress/isal: not in enabled drivers build config 00:01:44.381 compress/mlx5: not in enabled drivers build config 00:01:44.381 compress/nitrox: not in enabled drivers build config 00:01:44.381 compress/octeontx: not in enabled drivers build config 00:01:44.381 compress/zlib: not in enabled drivers build config 00:01:44.381 regex/*: missing internal dependency, "regexdev" 00:01:44.381 ml/*: missing internal dependency, "mldev" 00:01:44.381 vdpa/ifc: not in enabled drivers build config 00:01:44.381 vdpa/mlx5: not in enabled drivers build config 00:01:44.381 vdpa/nfp: not in enabled drivers build config 00:01:44.381 vdpa/sfc: not in enabled drivers build config 00:01:44.381 event/*: missing internal dependency, "eventdev" 00:01:44.381 baseband/*: missing internal dependency, "bbdev" 00:01:44.381 gpu/*: missing internal dependency, "gpudev" 00:01:44.381 00:01:44.381 00:01:44.381 Build targets in project: 84 00:01:44.381 00:01:44.381 DPDK 24.03.0 00:01:44.381 00:01:44.381 User defined options 00:01:44.381 buildtype : debug 00:01:44.381 default_library : shared 00:01:44.381 libdir : lib 00:01:44.381 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:44.381 b_sanitize : address 00:01:44.381 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:44.381 c_link_args : 00:01:44.381 cpu_instruction_set: native 00:01:44.381 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:01:44.381 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:01:44.381 enable_docs : false 00:01:44.381 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:44.381 enable_kmods : false 00:01:44.381 max_lcores : 128 00:01:44.381 tests : false 00:01:44.381 00:01:44.381 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:44.381 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:01:44.381 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:44.381 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:44.381 [3/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:44.381 [4/267] Linking static target lib/librte_kvargs.a 00:01:44.381 [5/267] Linking static target lib/librte_log.a 00:01:44.381 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:44.642 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:44.642 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:44.902 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:44.902 [10/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:44.902 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:44.902 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:44.902 [13/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:44.902 [14/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.902 [15/267] Linking static target lib/librte_telemetry.a 00:01:44.902 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:44.902 [17/267] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:45.161 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:45.161 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:45.161 [20/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.421 [21/267] Linking target lib/librte_log.so.24.1 00:01:45.421 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:45.421 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:45.421 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:45.421 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:45.421 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:45.421 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:45.421 [28/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:45.682 [29/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.682 [30/267] Linking target lib/librte_kvargs.so.24.1 00:01:45.682 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:45.682 [32/267] Linking target lib/librte_telemetry.so.24.1 00:01:45.682 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:45.682 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:45.942 [35/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:45.942 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:45.942 [37/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:45.942 [38/267] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:45.942 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:45.942 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:45.942 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:45.942 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:45.942 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:45.942 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:46.204 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:46.204 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:46.204 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:46.468 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:46.468 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:46.468 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:46.468 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:46.468 [52/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:46.728 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:46.728 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:46.728 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:46.728 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:46.728 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:46.728 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:46.989 [59/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:46.989 [60/267] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:46.989 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:46.989 [62/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:46.989 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:46.989 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:46.989 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:46.989 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:47.249 [67/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:47.249 [68/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:47.510 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:47.510 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:47.510 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:47.510 [72/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:47.510 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:47.510 [74/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:47.510 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:47.510 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:47.510 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:47.510 [78/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:47.510 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:47.772 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:47.772 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:47.772 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:47.772 [83/267] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:48.034 [84/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:48.034 [85/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:48.034 [86/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:48.034 [87/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:48.034 [88/267] Linking static target lib/librte_eal.a 00:01:48.034 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:48.034 [90/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:48.294 [91/267] Linking static target lib/librte_ring.a 00:01:48.294 [92/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:48.294 [93/267] Linking static target lib/librte_rcu.a 00:01:48.294 [94/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:48.294 [95/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:48.294 [96/267] Linking static target lib/librte_mempool.a 00:01:48.294 [97/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:48.554 [98/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:48.554 [99/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.554 [100/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:48.811 [101/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:48.811 [102/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:48.811 [103/267] Linking static target lib/librte_mbuf.a 00:01:48.811 [104/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:48.811 [105/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.811 [106/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:48.811 [107/267] Compiling C object 
lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:49.068 [108/267] Linking static target lib/librte_net.a 00:01:49.068 [109/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:49.068 [110/267] Linking static target lib/librte_meter.a 00:01:49.068 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:49.068 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:49.328 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:49.328 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:49.328 [115/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.328 [116/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.588 [117/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.588 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:49.848 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:49.848 [120/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.848 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:49.848 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:49.848 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:50.108 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:50.108 [125/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:50.108 [126/267] Linking static target lib/librte_pci.a 00:01:50.108 [127/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:50.108 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:50.108 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:50.108 
[130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:50.367 [131/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:50.367 [132/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.367 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:50.367 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:50.367 [135/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:50.367 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:50.367 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:50.367 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:50.367 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:50.367 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:50.367 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:50.626 [142/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:50.626 [143/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:50.626 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:50.626 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:50.626 [146/267] Linking static target lib/librte_cmdline.a 00:01:50.886 [147/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:50.886 [148/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:50.886 [149/267] Linking static target lib/librte_timer.a 00:01:50.886 [150/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:51.146 [151/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:51.146 [152/267] 
Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:51.146 [153/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:51.146 [154/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:51.408 [155/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:51.408 [156/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.668 [157/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:51.668 [158/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:51.668 [159/267] Linking static target lib/librte_compressdev.a 00:01:51.668 [160/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:51.668 [161/267] Linking static target lib/librte_hash.a 00:01:51.668 [162/267] Linking static target lib/librte_ethdev.a 00:01:51.668 [163/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:51.668 [164/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:51.668 [165/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:51.927 [166/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:51.927 [167/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:51.927 [168/267] Linking static target lib/librte_dmadev.a 00:01:52.185 [169/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:52.185 [170/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:52.185 [171/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:52.185 [172/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.185 [173/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:52.443 [174/267] Generating lib/compressdev.sym_chk with a custom 
command (wrapped by meson to capture output) 00:01:52.443 [175/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:52.443 [176/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:52.443 [177/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:52.701 [178/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.701 [179/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:52.701 [180/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.701 [181/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:52.701 [182/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:52.701 [183/267] Linking static target lib/librte_cryptodev.a 00:01:52.701 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:52.701 [185/267] Linking static target lib/librte_power.a 00:01:52.959 [186/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:52.959 [187/267] Linking static target lib/librte_reorder.a 00:01:53.216 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:53.216 [189/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:53.216 [190/267] Linking static target lib/librte_security.a 00:01:53.216 [191/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:53.216 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:53.216 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.475 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:53.734 [195/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.734 [196/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 
00:01:53.734 [197/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:53.734 [198/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:53.992 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:53.992 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:54.250 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:54.250 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:54.250 [203/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:54.250 [204/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:54.508 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:54.508 [206/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:54.508 [207/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:54.508 [208/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:54.508 [209/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:54.765 [210/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.765 [211/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:54.765 [212/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:54.765 [213/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:54.765 [214/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:54.766 [215/267] Linking static target drivers/librte_bus_vdev.a 00:01:54.766 [216/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:54.766 [217/267] Linking static target drivers/librte_bus_pci.a 00:01:54.766 [218/267] Compiling C object 
drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:54.766 [219/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:54.766 [220/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:55.024 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:55.024 [222/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:55.024 [223/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.024 [224/267] Linking static target drivers/librte_mempool_ring.a 00:01:55.024 [225/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:55.293 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.550 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:56.483 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.483 [229/267] Linking target lib/librte_eal.so.24.1 00:01:56.483 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:56.483 [231/267] Linking target lib/librte_ring.so.24.1 00:01:56.483 [232/267] Linking target lib/librte_dmadev.so.24.1 00:01:56.483 [233/267] Linking target lib/librte_meter.so.24.1 00:01:56.740 [234/267] Linking target lib/librte_pci.so.24.1 00:01:56.740 [235/267] Linking target lib/librte_timer.so.24.1 00:01:56.740 [236/267] Linking target drivers/librte_bus_vdev.so.24.1 00:01:56.740 [237/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:56.740 [238/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:56.740 [239/267] Linking target lib/librte_rcu.so.24.1 00:01:56.740 [240/267] Linking target lib/librte_mempool.so.24.1 00:01:56.740 [241/267] 
Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:56.740 [242/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:56.740 [243/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:56.740 [244/267] Linking target drivers/librte_bus_pci.so.24.1 00:01:56.740 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:56.997 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:56.997 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:01:56.997 [248/267] Linking target lib/librte_mbuf.so.24.1 00:01:56.997 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:56.997 [250/267] Linking target lib/librte_net.so.24.1 00:01:56.997 [251/267] Linking target lib/librte_compressdev.so.24.1 00:01:56.997 [252/267] Linking target lib/librte_cryptodev.so.24.1 00:01:56.997 [253/267] Linking target lib/librte_reorder.so.24.1 00:01:57.254 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:57.254 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:57.254 [256/267] Linking target lib/librte_cmdline.so.24.1 00:01:57.254 [257/267] Linking target lib/librte_hash.so.24.1 00:01:57.254 [258/267] Linking target lib/librte_security.so.24.1 00:01:57.254 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:57.511 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.511 [261/267] Linking target lib/librte_ethdev.so.24.1 00:01:57.511 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:57.768 [263/267] Linking target lib/librte_power.so.24.1 00:01:59.139 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:59.139 
[265/267] Linking static target lib/librte_vhost.a 00:02:00.511 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.511 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:00.511 INFO: autodetecting backend as ninja 00:02:00.511 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:15.445 CC lib/ut/ut.o 00:02:15.445 CC lib/ut_mock/mock.o 00:02:15.445 CC lib/log/log_flags.o 00:02:15.445 CC lib/log/log.o 00:02:15.445 CC lib/log/log_deprecated.o 00:02:15.445 LIB libspdk_ut.a 00:02:15.445 LIB libspdk_ut_mock.a 00:02:15.445 SO libspdk_ut.so.2.0 00:02:15.445 SO libspdk_ut_mock.so.6.0 00:02:15.445 LIB libspdk_log.a 00:02:15.445 SO libspdk_log.so.7.0 00:02:15.445 SYMLINK libspdk_ut.so 00:02:15.445 SYMLINK libspdk_ut_mock.so 00:02:15.445 SYMLINK libspdk_log.so 00:02:15.445 CC lib/util/base64.o 00:02:15.445 CC lib/util/bit_array.o 00:02:15.445 CC lib/util/cpuset.o 00:02:15.445 CC lib/util/crc16.o 00:02:15.445 CC lib/util/crc32.o 00:02:15.445 CC lib/util/crc32c.o 00:02:15.445 CC lib/dma/dma.o 00:02:15.445 CXX lib/trace_parser/trace.o 00:02:15.445 CC lib/ioat/ioat.o 00:02:15.445 CC lib/vfio_user/host/vfio_user_pci.o 00:02:15.445 CC lib/util/crc32_ieee.o 00:02:15.445 CC lib/util/crc64.o 00:02:15.445 CC lib/util/dif.o 00:02:15.445 CC lib/util/fd.o 00:02:15.445 CC lib/util/fd_group.o 00:02:15.445 LIB libspdk_dma.a 00:02:15.445 CC lib/util/file.o 00:02:15.445 SO libspdk_dma.so.5.0 00:02:15.445 CC lib/util/hexlify.o 00:02:15.445 CC lib/util/iov.o 00:02:15.445 CC lib/util/math.o 00:02:15.445 SYMLINK libspdk_dma.so 00:02:15.445 CC lib/util/net.o 00:02:15.445 LIB libspdk_ioat.a 00:02:15.445 SO libspdk_ioat.so.7.0 00:02:15.445 CC lib/vfio_user/host/vfio_user.o 00:02:15.445 CC lib/util/pipe.o 00:02:15.445 CC lib/util/strerror_tls.o 00:02:15.445 CC lib/util/string.o 00:02:15.445 SYMLINK libspdk_ioat.so 00:02:15.445 CC lib/util/uuid.o 00:02:15.445 CC 
lib/util/xor.o 00:02:15.445 CC lib/util/zipf.o 00:02:15.445 CC lib/util/md5.o 00:02:15.445 LIB libspdk_vfio_user.a 00:02:15.445 SO libspdk_vfio_user.so.5.0 00:02:15.445 SYMLINK libspdk_vfio_user.so 00:02:15.445 LIB libspdk_util.a 00:02:15.445 SO libspdk_util.so.10.0 00:02:15.445 LIB libspdk_trace_parser.a 00:02:15.445 SO libspdk_trace_parser.so.6.0 00:02:15.445 SYMLINK libspdk_util.so 00:02:15.445 SYMLINK libspdk_trace_parser.so 00:02:15.445 CC lib/vmd/vmd.o 00:02:15.445 CC lib/vmd/led.o 00:02:15.445 CC lib/idxd/idxd.o 00:02:15.445 CC lib/idxd/idxd_user.o 00:02:15.445 CC lib/idxd/idxd_kernel.o 00:02:15.445 CC lib/conf/conf.o 00:02:15.445 CC lib/rdma_utils/rdma_utils.o 00:02:15.445 CC lib/rdma_provider/common.o 00:02:15.445 CC lib/json/json_parse.o 00:02:15.445 CC lib/env_dpdk/env.o 00:02:15.445 CC lib/env_dpdk/memory.o 00:02:15.445 CC lib/env_dpdk/pci.o 00:02:15.703 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:15.703 LIB libspdk_conf.a 00:02:15.703 CC lib/json/json_util.o 00:02:15.703 SO libspdk_conf.so.6.0 00:02:15.703 CC lib/env_dpdk/init.o 00:02:15.703 LIB libspdk_rdma_utils.a 00:02:15.703 SO libspdk_rdma_utils.so.1.0 00:02:15.703 SYMLINK libspdk_conf.so 00:02:15.703 CC lib/env_dpdk/threads.o 00:02:15.703 SYMLINK libspdk_rdma_utils.so 00:02:15.703 CC lib/env_dpdk/pci_ioat.o 00:02:15.703 LIB libspdk_rdma_provider.a 00:02:15.703 SO libspdk_rdma_provider.so.6.0 00:02:15.703 SYMLINK libspdk_rdma_provider.so 00:02:15.703 CC lib/env_dpdk/pci_virtio.o 00:02:15.960 CC lib/env_dpdk/pci_vmd.o 00:02:15.960 CC lib/env_dpdk/pci_idxd.o 00:02:15.960 CC lib/json/json_write.o 00:02:15.960 CC lib/env_dpdk/pci_event.o 00:02:15.960 LIB libspdk_idxd.a 00:02:15.960 CC lib/env_dpdk/sigbus_handler.o 00:02:15.960 CC lib/env_dpdk/pci_dpdk.o 00:02:15.960 SO libspdk_idxd.so.12.1 00:02:15.960 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:15.960 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:15.960 SYMLINK libspdk_idxd.so 00:02:15.960 LIB libspdk_vmd.a 00:02:16.218 SO libspdk_vmd.so.6.0 00:02:16.218 LIB 
libspdk_json.a 00:02:16.218 SYMLINK libspdk_vmd.so 00:02:16.218 SO libspdk_json.so.6.0 00:02:16.218 SYMLINK libspdk_json.so 00:02:16.512 CC lib/jsonrpc/jsonrpc_server.o 00:02:16.513 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:16.513 CC lib/jsonrpc/jsonrpc_client.o 00:02:16.513 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:16.793 LIB libspdk_jsonrpc.a 00:02:16.793 SO libspdk_jsonrpc.so.6.0 00:02:16.793 SYMLINK libspdk_jsonrpc.so 00:02:16.793 LIB libspdk_env_dpdk.a 00:02:16.793 SO libspdk_env_dpdk.so.15.0 00:02:17.051 SYMLINK libspdk_env_dpdk.so 00:02:17.051 CC lib/rpc/rpc.o 00:02:17.309 LIB libspdk_rpc.a 00:02:17.309 SO libspdk_rpc.so.6.0 00:02:17.309 SYMLINK libspdk_rpc.so 00:02:17.569 CC lib/notify/notify.o 00:02:17.569 CC lib/notify/notify_rpc.o 00:02:17.569 CC lib/keyring/keyring.o 00:02:17.569 CC lib/keyring/keyring_rpc.o 00:02:17.569 CC lib/trace/trace_flags.o 00:02:17.569 CC lib/trace/trace.o 00:02:17.569 CC lib/trace/trace_rpc.o 00:02:17.569 LIB libspdk_notify.a 00:02:17.569 SO libspdk_notify.so.6.0 00:02:17.828 SYMLINK libspdk_notify.so 00:02:17.828 LIB libspdk_keyring.a 00:02:17.828 SO libspdk_keyring.so.2.0 00:02:17.828 LIB libspdk_trace.a 00:02:17.828 SO libspdk_trace.so.11.0 00:02:17.828 SYMLINK libspdk_keyring.so 00:02:17.828 SYMLINK libspdk_trace.so 00:02:18.087 CC lib/thread/thread.o 00:02:18.087 CC lib/thread/iobuf.o 00:02:18.087 CC lib/sock/sock.o 00:02:18.087 CC lib/sock/sock_rpc.o 00:02:18.345 LIB libspdk_sock.a 00:02:18.345 SO libspdk_sock.so.10.0 00:02:18.602 SYMLINK libspdk_sock.so 00:02:18.858 CC lib/nvme/nvme_ns_cmd.o 00:02:18.858 CC lib/nvme/nvme_fabric.o 00:02:18.858 CC lib/nvme/nvme_ctrlr.o 00:02:18.858 CC lib/nvme/nvme.o 00:02:18.858 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:18.858 CC lib/nvme/nvme_pcie_common.o 00:02:18.858 CC lib/nvme/nvme_qpair.o 00:02:18.858 CC lib/nvme/nvme_pcie.o 00:02:18.858 CC lib/nvme/nvme_ns.o 00:02:19.423 CC lib/nvme/nvme_quirks.o 00:02:19.423 CC lib/nvme/nvme_transport.o 00:02:19.423 CC lib/nvme/nvme_discovery.o 
00:02:19.423 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:19.423 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:19.423 LIB libspdk_thread.a 00:02:19.681 CC lib/nvme/nvme_tcp.o 00:02:19.681 SO libspdk_thread.so.10.1 00:02:19.681 CC lib/nvme/nvme_opal.o 00:02:19.681 CC lib/nvme/nvme_io_msg.o 00:02:19.681 SYMLINK libspdk_thread.so 00:02:19.681 CC lib/nvme/nvme_poll_group.o 00:02:19.681 CC lib/nvme/nvme_zns.o 00:02:19.681 CC lib/nvme/nvme_stubs.o 00:02:19.940 CC lib/nvme/nvme_auth.o 00:02:19.940 CC lib/nvme/nvme_cuse.o 00:02:19.940 CC lib/nvme/nvme_rdma.o 00:02:20.199 CC lib/accel/accel.o 00:02:20.199 CC lib/accel/accel_rpc.o 00:02:20.199 CC lib/blob/blobstore.o 00:02:20.199 CC lib/init/json_config.o 00:02:20.199 CC lib/init/subsystem.o 00:02:20.456 CC lib/blob/request.o 00:02:20.456 CC lib/blob/zeroes.o 00:02:20.456 CC lib/accel/accel_sw.o 00:02:20.456 CC lib/init/subsystem_rpc.o 00:02:20.715 CC lib/blob/blob_bs_dev.o 00:02:20.715 CC lib/init/rpc.o 00:02:20.715 CC lib/virtio/virtio.o 00:02:20.715 CC lib/virtio/virtio_vhost_user.o 00:02:20.715 CC lib/virtio/virtio_vfio_user.o 00:02:20.715 CC lib/virtio/virtio_pci.o 00:02:20.972 CC lib/fsdev/fsdev.o 00:02:20.972 LIB libspdk_init.a 00:02:20.972 SO libspdk_init.so.6.0 00:02:20.972 SYMLINK libspdk_init.so 00:02:20.972 CC lib/fsdev/fsdev_io.o 00:02:20.972 CC lib/fsdev/fsdev_rpc.o 00:02:21.231 LIB libspdk_virtio.a 00:02:21.231 CC lib/event/reactor.o 00:02:21.231 SO libspdk_virtio.so.7.0 00:02:21.231 CC lib/event/app.o 00:02:21.231 CC lib/event/log_rpc.o 00:02:21.231 CC lib/event/app_rpc.o 00:02:21.231 SYMLINK libspdk_virtio.so 00:02:21.231 CC lib/event/scheduler_static.o 00:02:21.231 LIB libspdk_accel.a 00:02:21.489 LIB libspdk_fsdev.a 00:02:21.489 SO libspdk_accel.so.16.0 00:02:21.489 SO libspdk_fsdev.so.1.0 00:02:21.489 SYMLINK libspdk_accel.so 00:02:21.489 LIB libspdk_nvme.a 00:02:21.489 SYMLINK libspdk_fsdev.so 00:02:21.489 LIB libspdk_event.a 00:02:21.489 CC lib/bdev/bdev.o 00:02:21.489 CC lib/bdev/bdev_rpc.o 00:02:21.489 CC 
lib/bdev/bdev_zone.o 00:02:21.489 CC lib/bdev/part.o 00:02:21.489 CC lib/bdev/scsi_nvme.o 00:02:21.489 SO libspdk_nvme.so.14.0 00:02:21.747 SO libspdk_event.so.14.0 00:02:21.747 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:21.747 SYMLINK libspdk_event.so 00:02:21.747 SYMLINK libspdk_nvme.so 00:02:22.314 LIB libspdk_fuse_dispatcher.a 00:02:22.314 SO libspdk_fuse_dispatcher.so.1.0 00:02:22.573 SYMLINK libspdk_fuse_dispatcher.so 00:02:23.505 LIB libspdk_blob.a 00:02:23.505 SO libspdk_blob.so.11.0 00:02:23.505 SYMLINK libspdk_blob.so 00:02:23.763 CC lib/blobfs/blobfs.o 00:02:23.763 CC lib/blobfs/tree.o 00:02:23.763 CC lib/lvol/lvol.o 00:02:24.328 LIB libspdk_bdev.a 00:02:24.586 SO libspdk_bdev.so.16.0 00:02:24.586 SYMLINK libspdk_bdev.so 00:02:24.843 LIB libspdk_blobfs.a 00:02:24.843 CC lib/nbd/nbd.o 00:02:24.843 CC lib/nbd/nbd_rpc.o 00:02:24.843 CC lib/ublk/ublk.o 00:02:24.843 CC lib/ublk/ublk_rpc.o 00:02:24.843 CC lib/scsi/dev.o 00:02:24.843 CC lib/scsi/lun.o 00:02:24.843 CC lib/nvmf/ctrlr.o 00:02:24.843 SO libspdk_blobfs.so.10.0 00:02:24.843 CC lib/ftl/ftl_core.o 00:02:24.843 LIB libspdk_lvol.a 00:02:24.843 SO libspdk_lvol.so.10.0 00:02:24.843 SYMLINK libspdk_blobfs.so 00:02:24.843 CC lib/nvmf/ctrlr_discovery.o 00:02:24.843 SYMLINK libspdk_lvol.so 00:02:24.843 CC lib/nvmf/ctrlr_bdev.o 00:02:24.843 CC lib/nvmf/subsystem.o 00:02:24.843 CC lib/nvmf/nvmf.o 00:02:25.100 CC lib/ftl/ftl_init.o 00:02:25.100 CC lib/scsi/port.o 00:02:25.100 CC lib/ftl/ftl_layout.o 00:02:25.100 LIB libspdk_nbd.a 00:02:25.100 CC lib/scsi/scsi.o 00:02:25.100 SO libspdk_nbd.so.7.0 00:02:25.100 CC lib/ftl/ftl_debug.o 00:02:25.359 SYMLINK libspdk_nbd.so 00:02:25.359 CC lib/ftl/ftl_io.o 00:02:25.359 CC lib/ftl/ftl_sb.o 00:02:25.359 CC lib/scsi/scsi_bdev.o 00:02:25.359 CC lib/ftl/ftl_l2p.o 00:02:25.359 CC lib/nvmf/nvmf_rpc.o 00:02:25.359 LIB libspdk_ublk.a 00:02:25.359 SO libspdk_ublk.so.3.0 00:02:25.359 CC lib/nvmf/transport.o 00:02:25.359 CC lib/nvmf/tcp.o 00:02:25.359 CC lib/ftl/ftl_l2p_flat.o 
00:02:25.617 SYMLINK libspdk_ublk.so 00:02:25.617 CC lib/ftl/ftl_nv_cache.o 00:02:25.617 CC lib/nvmf/stubs.o 00:02:25.617 CC lib/nvmf/mdns_server.o 00:02:25.874 CC lib/scsi/scsi_pr.o 00:02:25.874 CC lib/nvmf/rdma.o 00:02:25.874 CC lib/nvmf/auth.o 00:02:26.132 CC lib/scsi/scsi_rpc.o 00:02:26.132 CC lib/scsi/task.o 00:02:26.132 CC lib/ftl/ftl_band.o 00:02:26.132 CC lib/ftl/ftl_band_ops.o 00:02:26.132 CC lib/ftl/ftl_writer.o 00:02:26.391 CC lib/ftl/ftl_rq.o 00:02:26.391 LIB libspdk_scsi.a 00:02:26.391 SO libspdk_scsi.so.9.0 00:02:26.391 CC lib/ftl/ftl_reloc.o 00:02:26.391 CC lib/ftl/ftl_l2p_cache.o 00:02:26.391 SYMLINK libspdk_scsi.so 00:02:26.391 CC lib/ftl/ftl_p2l.o 00:02:26.391 CC lib/ftl/ftl_p2l_log.o 00:02:26.391 CC lib/ftl/mngt/ftl_mngt.o 00:02:26.649 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:26.649 CC lib/iscsi/conn.o 00:02:26.649 CC lib/iscsi/init_grp.o 00:02:26.649 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:26.649 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:26.906 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:26.906 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:26.906 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:26.906 CC lib/iscsi/iscsi.o 00:02:26.906 CC lib/vhost/vhost.o 00:02:26.906 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:26.906 CC lib/iscsi/param.o 00:02:26.906 CC lib/iscsi/portal_grp.o 00:02:26.906 CC lib/iscsi/tgt_node.o 00:02:27.164 CC lib/vhost/vhost_rpc.o 00:02:27.164 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:27.164 CC lib/vhost/vhost_scsi.o 00:02:27.164 CC lib/iscsi/iscsi_subsystem.o 00:02:27.164 CC lib/iscsi/iscsi_rpc.o 00:02:27.164 CC lib/iscsi/task.o 00:02:27.423 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:27.423 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:27.423 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:27.423 CC lib/vhost/vhost_blk.o 00:02:27.680 CC lib/vhost/rte_vhost_user.o 00:02:27.680 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:27.680 CC lib/ftl/utils/ftl_conf.o 00:02:27.680 CC lib/ftl/utils/ftl_md.o 00:02:27.680 CC lib/ftl/utils/ftl_mempool.o 00:02:27.680 CC lib/ftl/utils/ftl_bitmap.o 
00:02:27.680 CC lib/ftl/utils/ftl_property.o 00:02:27.938 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:27.938 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:27.938 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:27.938 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:27.938 LIB libspdk_nvmf.a 00:02:27.938 LIB libspdk_iscsi.a 00:02:27.938 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:27.938 SO libspdk_iscsi.so.8.0 00:02:27.938 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:28.195 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:28.195 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:28.195 SO libspdk_nvmf.so.19.0 00:02:28.195 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:28.195 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:28.195 SYMLINK libspdk_iscsi.so 00:02:28.195 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:28.195 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:28.195 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:28.195 CC lib/ftl/base/ftl_base_dev.o 00:02:28.195 CC lib/ftl/base/ftl_base_bdev.o 00:02:28.195 SYMLINK libspdk_nvmf.so 00:02:28.195 CC lib/ftl/ftl_trace.o 00:02:28.454 LIB libspdk_vhost.a 00:02:28.454 SO libspdk_vhost.so.8.0 00:02:28.454 SYMLINK libspdk_vhost.so 00:02:28.454 LIB libspdk_ftl.a 00:02:28.712 SO libspdk_ftl.so.9.0 00:02:28.970 SYMLINK libspdk_ftl.so 00:02:29.228 CC module/env_dpdk/env_dpdk_rpc.o 00:02:29.228 CC module/blob/bdev/blob_bdev.o 00:02:29.228 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:29.228 CC module/keyring/linux/keyring.o 00:02:29.228 CC module/accel/error/accel_error.o 00:02:29.228 CC module/keyring/file/keyring.o 00:02:29.228 CC module/fsdev/aio/fsdev_aio.o 00:02:29.228 CC module/accel/ioat/accel_ioat.o 00:02:29.228 CC module/sock/posix/posix.o 00:02:29.228 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:29.228 LIB libspdk_env_dpdk_rpc.a 00:02:29.228 SO libspdk_env_dpdk_rpc.so.6.0 00:02:29.486 SYMLINK libspdk_env_dpdk_rpc.so 00:02:29.486 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:29.486 CC module/keyring/linux/keyring_rpc.o 00:02:29.486 CC 
module/keyring/file/keyring_rpc.o 00:02:29.486 LIB libspdk_scheduler_dpdk_governor.a 00:02:29.486 CC module/accel/ioat/accel_ioat_rpc.o 00:02:29.486 CC module/accel/error/accel_error_rpc.o 00:02:29.486 LIB libspdk_scheduler_dynamic.a 00:02:29.486 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:29.486 LIB libspdk_blob_bdev.a 00:02:29.486 SO libspdk_scheduler_dynamic.so.4.0 00:02:29.486 SO libspdk_blob_bdev.so.11.0 00:02:29.486 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:29.486 CC module/fsdev/aio/linux_aio_mgr.o 00:02:29.486 LIB libspdk_keyring_linux.a 00:02:29.486 SYMLINK libspdk_blob_bdev.so 00:02:29.486 SO libspdk_keyring_linux.so.1.0 00:02:29.486 LIB libspdk_keyring_file.a 00:02:29.486 SYMLINK libspdk_scheduler_dynamic.so 00:02:29.486 SO libspdk_keyring_file.so.2.0 00:02:29.486 LIB libspdk_accel_error.a 00:02:29.486 LIB libspdk_accel_ioat.a 00:02:29.486 SO libspdk_accel_ioat.so.6.0 00:02:29.486 SYMLINK libspdk_keyring_linux.so 00:02:29.486 SO libspdk_accel_error.so.2.0 00:02:29.486 SYMLINK libspdk_keyring_file.so 00:02:29.776 SYMLINK libspdk_accel_error.so 00:02:29.776 SYMLINK libspdk_accel_ioat.so 00:02:29.776 CC module/scheduler/gscheduler/gscheduler.o 00:02:29.776 CC module/accel/dsa/accel_dsa.o 00:02:29.776 CC module/bdev/error/vbdev_error.o 00:02:29.776 CC module/bdev/delay/vbdev_delay.o 00:02:29.776 CC module/blobfs/bdev/blobfs_bdev.o 00:02:29.776 CC module/accel/iaa/accel_iaa.o 00:02:29.776 CC module/bdev/gpt/gpt.o 00:02:29.776 LIB libspdk_scheduler_gscheduler.a 00:02:29.776 CC module/bdev/lvol/vbdev_lvol.o 00:02:29.776 SO libspdk_scheduler_gscheduler.so.4.0 00:02:29.776 LIB libspdk_sock_posix.a 00:02:29.776 SYMLINK libspdk_scheduler_gscheduler.so 00:02:29.776 CC module/accel/iaa/accel_iaa_rpc.o 00:02:29.776 SO libspdk_sock_posix.so.6.0 00:02:30.034 LIB libspdk_fsdev_aio.a 00:02:30.034 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:30.035 SYMLINK libspdk_sock_posix.so 00:02:30.035 SO libspdk_fsdev_aio.so.1.0 00:02:30.035 CC module/bdev/gpt/vbdev_gpt.o 
00:02:30.035 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:30.035 LIB libspdk_accel_iaa.a 00:02:30.035 SO libspdk_accel_iaa.so.3.0 00:02:30.035 SYMLINK libspdk_fsdev_aio.so 00:02:30.035 CC module/bdev/error/vbdev_error_rpc.o 00:02:30.035 CC module/accel/dsa/accel_dsa_rpc.o 00:02:30.035 SYMLINK libspdk_accel_iaa.so 00:02:30.035 CC module/bdev/malloc/bdev_malloc.o 00:02:30.035 LIB libspdk_blobfs_bdev.a 00:02:30.035 SO libspdk_blobfs_bdev.so.6.0 00:02:30.035 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:30.035 LIB libspdk_accel_dsa.a 00:02:30.035 CC module/bdev/null/bdev_null.o 00:02:30.293 LIB libspdk_bdev_error.a 00:02:30.294 SO libspdk_accel_dsa.so.5.0 00:02:30.294 CC module/bdev/nvme/bdev_nvme.o 00:02:30.294 SYMLINK libspdk_blobfs_bdev.so 00:02:30.294 SO libspdk_bdev_error.so.6.0 00:02:30.294 CC module/bdev/null/bdev_null_rpc.o 00:02:30.294 SYMLINK libspdk_accel_dsa.so 00:02:30.294 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:30.294 LIB libspdk_bdev_gpt.a 00:02:30.294 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:30.294 SYMLINK libspdk_bdev_error.so 00:02:30.294 SO libspdk_bdev_gpt.so.6.0 00:02:30.294 LIB libspdk_bdev_delay.a 00:02:30.294 SYMLINK libspdk_bdev_gpt.so 00:02:30.294 SO libspdk_bdev_delay.so.6.0 00:02:30.294 LIB libspdk_bdev_lvol.a 00:02:30.294 SO libspdk_bdev_lvol.so.6.0 00:02:30.294 CC module/bdev/passthru/vbdev_passthru.o 00:02:30.294 CC module/bdev/nvme/nvme_rpc.o 00:02:30.294 SYMLINK libspdk_bdev_delay.so 00:02:30.294 LIB libspdk_bdev_null.a 00:02:30.294 SYMLINK libspdk_bdev_lvol.so 00:02:30.552 SO libspdk_bdev_null.so.6.0 00:02:30.552 CC module/bdev/raid/bdev_raid.o 00:02:30.552 CC module/bdev/split/vbdev_split.o 00:02:30.552 LIB libspdk_bdev_malloc.a 00:02:30.552 SYMLINK libspdk_bdev_null.so 00:02:30.552 SO libspdk_bdev_malloc.so.6.0 00:02:30.552 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:30.552 CC module/bdev/aio/bdev_aio.o 00:02:30.552 SYMLINK libspdk_bdev_malloc.so 00:02:30.552 CC module/bdev/raid/bdev_raid_rpc.o 00:02:30.552 CC 
module/bdev/raid/bdev_raid_sb.o 00:02:30.552 CC module/bdev/ftl/bdev_ftl.o 00:02:30.552 CC module/bdev/split/vbdev_split_rpc.o 00:02:30.552 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:30.810 CC module/bdev/raid/raid0.o 00:02:30.810 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:30.810 LIB libspdk_bdev_split.a 00:02:30.810 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:30.810 SO libspdk_bdev_split.so.6.0 00:02:30.810 LIB libspdk_bdev_passthru.a 00:02:30.810 SO libspdk_bdev_passthru.so.6.0 00:02:30.810 SYMLINK libspdk_bdev_split.so 00:02:30.810 CC module/bdev/raid/raid1.o 00:02:30.810 CC module/bdev/raid/concat.o 00:02:30.810 CC module/bdev/aio/bdev_aio_rpc.o 00:02:30.810 SYMLINK libspdk_bdev_passthru.so 00:02:30.810 CC module/bdev/raid/raid5f.o 00:02:30.810 LIB libspdk_bdev_zone_block.a 00:02:31.068 SO libspdk_bdev_zone_block.so.6.0 00:02:31.068 LIB libspdk_bdev_ftl.a 00:02:31.068 SO libspdk_bdev_ftl.so.6.0 00:02:31.068 SYMLINK libspdk_bdev_zone_block.so 00:02:31.068 LIB libspdk_bdev_aio.a 00:02:31.068 CC module/bdev/nvme/bdev_mdns_client.o 00:02:31.068 SO libspdk_bdev_aio.so.6.0 00:02:31.068 SYMLINK libspdk_bdev_ftl.so 00:02:31.068 CC module/bdev/iscsi/bdev_iscsi.o 00:02:31.068 CC module/bdev/nvme/vbdev_opal.o 00:02:31.068 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:31.068 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:31.068 SYMLINK libspdk_bdev_aio.so 00:02:31.068 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:31.068 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:31.068 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:31.326 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:31.326 LIB libspdk_bdev_iscsi.a 00:02:31.583 SO libspdk_bdev_iscsi.so.6.0 00:02:31.583 LIB libspdk_bdev_raid.a 00:02:31.583 SYMLINK libspdk_bdev_iscsi.so 00:02:31.583 SO libspdk_bdev_raid.so.6.0 00:02:31.583 SYMLINK libspdk_bdev_raid.so 00:02:31.583 LIB libspdk_bdev_virtio.a 00:02:31.583 SO libspdk_bdev_virtio.so.6.0 00:02:31.841 SYMLINK libspdk_bdev_virtio.so 00:02:32.473 LIB libspdk_bdev_nvme.a 
00:02:32.731 SO libspdk_bdev_nvme.so.7.0 00:02:32.731 SYMLINK libspdk_bdev_nvme.so 00:02:32.989 CC module/event/subsystems/scheduler/scheduler.o 00:02:32.989 CC module/event/subsystems/vmd/vmd.o 00:02:32.989 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:32.989 CC module/event/subsystems/iobuf/iobuf.o 00:02:32.989 CC module/event/subsystems/fsdev/fsdev.o 00:02:32.989 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:32.989 CC module/event/subsystems/keyring/keyring.o 00:02:32.989 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:33.247 CC module/event/subsystems/sock/sock.o 00:02:33.247 LIB libspdk_event_keyring.a 00:02:33.247 LIB libspdk_event_scheduler.a 00:02:33.247 LIB libspdk_event_fsdev.a 00:02:33.247 LIB libspdk_event_vmd.a 00:02:33.247 LIB libspdk_event_iobuf.a 00:02:33.247 LIB libspdk_event_sock.a 00:02:33.247 SO libspdk_event_scheduler.so.4.0 00:02:33.247 SO libspdk_event_keyring.so.1.0 00:02:33.247 SO libspdk_event_fsdev.so.1.0 00:02:33.247 LIB libspdk_event_vhost_blk.a 00:02:33.247 SO libspdk_event_sock.so.5.0 00:02:33.247 SO libspdk_event_vmd.so.6.0 00:02:33.247 SO libspdk_event_iobuf.so.3.0 00:02:33.247 SO libspdk_event_vhost_blk.so.3.0 00:02:33.247 SYMLINK libspdk_event_fsdev.so 00:02:33.247 SYMLINK libspdk_event_keyring.so 00:02:33.247 SYMLINK libspdk_event_scheduler.so 00:02:33.247 SYMLINK libspdk_event_vmd.so 00:02:33.247 SYMLINK libspdk_event_sock.so 00:02:33.247 SYMLINK libspdk_event_iobuf.so 00:02:33.247 SYMLINK libspdk_event_vhost_blk.so 00:02:33.505 CC module/event/subsystems/accel/accel.o 00:02:33.763 LIB libspdk_event_accel.a 00:02:33.763 SO libspdk_event_accel.so.6.0 00:02:33.763 SYMLINK libspdk_event_accel.so 00:02:34.022 CC module/event/subsystems/bdev/bdev.o 00:02:34.022 LIB libspdk_event_bdev.a 00:02:34.022 SO libspdk_event_bdev.so.6.0 00:02:34.280 SYMLINK libspdk_event_bdev.so 00:02:34.280 CC module/event/subsystems/ublk/ublk.o 00:02:34.280 CC module/event/subsystems/scsi/scsi.o 00:02:34.280 CC 
module/event/subsystems/nvmf/nvmf_tgt.o 00:02:34.280 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:34.280 CC module/event/subsystems/nbd/nbd.o 00:02:34.538 LIB libspdk_event_ublk.a 00:02:34.538 LIB libspdk_event_nbd.a 00:02:34.538 LIB libspdk_event_scsi.a 00:02:34.538 SO libspdk_event_ublk.so.3.0 00:02:34.538 SO libspdk_event_nbd.so.6.0 00:02:34.538 SO libspdk_event_scsi.so.6.0 00:02:34.538 SYMLINK libspdk_event_nbd.so 00:02:34.538 SYMLINK libspdk_event_ublk.so 00:02:34.538 SYMLINK libspdk_event_scsi.so 00:02:34.538 LIB libspdk_event_nvmf.a 00:02:34.538 SO libspdk_event_nvmf.so.6.0 00:02:34.538 SYMLINK libspdk_event_nvmf.so 00:02:34.797 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:34.797 CC module/event/subsystems/iscsi/iscsi.o 00:02:34.797 LIB libspdk_event_vhost_scsi.a 00:02:34.797 SO libspdk_event_vhost_scsi.so.3.0 00:02:34.797 LIB libspdk_event_iscsi.a 00:02:34.797 SO libspdk_event_iscsi.so.6.0 00:02:34.797 SYMLINK libspdk_event_vhost_scsi.so 00:02:35.055 SYMLINK libspdk_event_iscsi.so 00:02:35.055 SO libspdk.so.6.0 00:02:35.055 SYMLINK libspdk.so 00:02:35.313 CXX app/trace/trace.o 00:02:35.313 CC app/trace_record/trace_record.o 00:02:35.313 TEST_HEADER include/spdk/accel.h 00:02:35.313 TEST_HEADER include/spdk/accel_module.h 00:02:35.313 TEST_HEADER include/spdk/assert.h 00:02:35.313 TEST_HEADER include/spdk/barrier.h 00:02:35.313 TEST_HEADER include/spdk/base64.h 00:02:35.313 TEST_HEADER include/spdk/bdev.h 00:02:35.313 TEST_HEADER include/spdk/bdev_module.h 00:02:35.313 TEST_HEADER include/spdk/bdev_zone.h 00:02:35.313 TEST_HEADER include/spdk/bit_array.h 00:02:35.313 TEST_HEADER include/spdk/bit_pool.h 00:02:35.313 TEST_HEADER include/spdk/blob_bdev.h 00:02:35.313 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:35.313 TEST_HEADER include/spdk/blobfs.h 00:02:35.313 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:35.313 TEST_HEADER include/spdk/blob.h 00:02:35.313 TEST_HEADER include/spdk/conf.h 00:02:35.313 TEST_HEADER include/spdk/config.h 
00:02:35.313 TEST_HEADER include/spdk/cpuset.h 00:02:35.313 TEST_HEADER include/spdk/crc16.h 00:02:35.313 TEST_HEADER include/spdk/crc32.h 00:02:35.313 TEST_HEADER include/spdk/crc64.h 00:02:35.313 TEST_HEADER include/spdk/dif.h 00:02:35.313 TEST_HEADER include/spdk/dma.h 00:02:35.313 TEST_HEADER include/spdk/endian.h 00:02:35.313 TEST_HEADER include/spdk/env_dpdk.h 00:02:35.313 TEST_HEADER include/spdk/env.h 00:02:35.313 TEST_HEADER include/spdk/event.h 00:02:35.313 TEST_HEADER include/spdk/fd_group.h 00:02:35.313 TEST_HEADER include/spdk/fd.h 00:02:35.313 TEST_HEADER include/spdk/file.h 00:02:35.313 TEST_HEADER include/spdk/fsdev.h 00:02:35.313 TEST_HEADER include/spdk/fsdev_module.h 00:02:35.313 TEST_HEADER include/spdk/ftl.h 00:02:35.313 CC examples/util/zipf/zipf.o 00:02:35.313 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:35.313 TEST_HEADER include/spdk/gpt_spec.h 00:02:35.313 TEST_HEADER include/spdk/hexlify.h 00:02:35.313 CC test/thread/poller_perf/poller_perf.o 00:02:35.313 TEST_HEADER include/spdk/histogram_data.h 00:02:35.313 TEST_HEADER include/spdk/idxd.h 00:02:35.313 TEST_HEADER include/spdk/idxd_spec.h 00:02:35.313 TEST_HEADER include/spdk/init.h 00:02:35.313 CC examples/ioat/perf/perf.o 00:02:35.313 TEST_HEADER include/spdk/ioat.h 00:02:35.313 TEST_HEADER include/spdk/ioat_spec.h 00:02:35.313 TEST_HEADER include/spdk/iscsi_spec.h 00:02:35.313 TEST_HEADER include/spdk/json.h 00:02:35.313 TEST_HEADER include/spdk/jsonrpc.h 00:02:35.313 TEST_HEADER include/spdk/keyring.h 00:02:35.313 TEST_HEADER include/spdk/keyring_module.h 00:02:35.313 TEST_HEADER include/spdk/likely.h 00:02:35.313 TEST_HEADER include/spdk/log.h 00:02:35.313 TEST_HEADER include/spdk/lvol.h 00:02:35.313 TEST_HEADER include/spdk/md5.h 00:02:35.313 TEST_HEADER include/spdk/memory.h 00:02:35.313 TEST_HEADER include/spdk/mmio.h 00:02:35.313 TEST_HEADER include/spdk/nbd.h 00:02:35.313 TEST_HEADER include/spdk/net.h 00:02:35.313 TEST_HEADER include/spdk/notify.h 00:02:35.313 TEST_HEADER 
include/spdk/nvme.h 00:02:35.313 TEST_HEADER include/spdk/nvme_intel.h 00:02:35.313 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:35.313 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:35.313 CC test/app/bdev_svc/bdev_svc.o 00:02:35.313 TEST_HEADER include/spdk/nvme_spec.h 00:02:35.313 TEST_HEADER include/spdk/nvme_zns.h 00:02:35.313 CC test/dma/test_dma/test_dma.o 00:02:35.313 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:35.313 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:35.313 TEST_HEADER include/spdk/nvmf.h 00:02:35.313 TEST_HEADER include/spdk/nvmf_spec.h 00:02:35.313 CC test/env/mem_callbacks/mem_callbacks.o 00:02:35.313 TEST_HEADER include/spdk/nvmf_transport.h 00:02:35.313 TEST_HEADER include/spdk/opal.h 00:02:35.313 TEST_HEADER include/spdk/opal_spec.h 00:02:35.313 TEST_HEADER include/spdk/pci_ids.h 00:02:35.313 TEST_HEADER include/spdk/pipe.h 00:02:35.313 TEST_HEADER include/spdk/queue.h 00:02:35.313 TEST_HEADER include/spdk/reduce.h 00:02:35.313 TEST_HEADER include/spdk/rpc.h 00:02:35.313 TEST_HEADER include/spdk/scheduler.h 00:02:35.313 TEST_HEADER include/spdk/scsi.h 00:02:35.313 TEST_HEADER include/spdk/scsi_spec.h 00:02:35.314 TEST_HEADER include/spdk/sock.h 00:02:35.314 TEST_HEADER include/spdk/stdinc.h 00:02:35.314 TEST_HEADER include/spdk/string.h 00:02:35.314 TEST_HEADER include/spdk/thread.h 00:02:35.314 TEST_HEADER include/spdk/trace.h 00:02:35.314 TEST_HEADER include/spdk/trace_parser.h 00:02:35.314 TEST_HEADER include/spdk/tree.h 00:02:35.314 TEST_HEADER include/spdk/ublk.h 00:02:35.314 TEST_HEADER include/spdk/util.h 00:02:35.314 TEST_HEADER include/spdk/uuid.h 00:02:35.314 TEST_HEADER include/spdk/version.h 00:02:35.314 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:35.314 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:35.314 TEST_HEADER include/spdk/vhost.h 00:02:35.314 TEST_HEADER include/spdk/vmd.h 00:02:35.314 TEST_HEADER include/spdk/xor.h 00:02:35.314 TEST_HEADER include/spdk/zipf.h 00:02:35.314 CXX test/cpp_headers/accel.o 00:02:35.571 
LINK interrupt_tgt 00:02:35.571 LINK poller_perf 00:02:35.571 LINK zipf 00:02:35.571 LINK spdk_trace_record 00:02:35.571 LINK bdev_svc 00:02:35.571 LINK ioat_perf 00:02:35.571 CXX test/cpp_headers/accel_module.o 00:02:35.571 LINK spdk_trace 00:02:35.571 CC test/app/histogram_perf/histogram_perf.o 00:02:35.571 CC test/env/vtophys/vtophys.o 00:02:35.571 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:35.571 CXX test/cpp_headers/assert.o 00:02:35.829 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:35.829 CC examples/ioat/verify/verify.o 00:02:35.829 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:35.829 LINK histogram_perf 00:02:35.829 LINK vtophys 00:02:35.829 LINK env_dpdk_post_init 00:02:35.829 CXX test/cpp_headers/barrier.o 00:02:35.829 CC app/nvmf_tgt/nvmf_main.o 00:02:35.829 LINK test_dma 00:02:35.829 LINK mem_callbacks 00:02:35.829 LINK verify 00:02:36.089 CXX test/cpp_headers/base64.o 00:02:36.089 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:36.089 LINK nvmf_tgt 00:02:36.089 CC app/iscsi_tgt/iscsi_tgt.o 00:02:36.089 CC test/env/memory/memory_ut.o 00:02:36.089 CC app/spdk_tgt/spdk_tgt.o 00:02:36.089 CC test/env/pci/pci_ut.o 00:02:36.089 CXX test/cpp_headers/bdev.o 00:02:36.089 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:36.089 LINK nvme_fuzz 00:02:36.089 CXX test/cpp_headers/bdev_module.o 00:02:36.089 CC examples/thread/thread/thread_ex.o 00:02:36.089 LINK iscsi_tgt 00:02:36.372 CXX test/cpp_headers/bdev_zone.o 00:02:36.372 LINK spdk_tgt 00:02:36.372 CXX test/cpp_headers/bit_array.o 00:02:36.372 CC app/spdk_lspci/spdk_lspci.o 00:02:36.372 CXX test/cpp_headers/bit_pool.o 00:02:36.372 LINK pci_ut 00:02:36.372 LINK thread 00:02:36.372 CXX test/cpp_headers/blob_bdev.o 00:02:36.372 CC test/rpc_client/rpc_client_test.o 00:02:36.372 LINK spdk_lspci 00:02:36.372 CXX test/cpp_headers/blobfs_bdev.o 00:02:36.629 LINK vhost_fuzz 00:02:36.629 CC test/app/jsoncat/jsoncat.o 00:02:36.629 LINK rpc_client_test 00:02:36.629 CXX test/cpp_headers/blobfs.o 
00:02:36.629 LINK jsoncat 00:02:36.629 CC app/spdk_nvme_identify/identify.o 00:02:36.629 CC app/spdk_nvme_perf/perf.o 00:02:36.629 CC examples/sock/hello_world/hello_sock.o 00:02:36.887 CC examples/vmd/lsvmd/lsvmd.o 00:02:36.887 CXX test/cpp_headers/blob.o 00:02:36.887 CC examples/idxd/perf/perf.o 00:02:36.887 CC examples/vmd/led/led.o 00:02:36.887 LINK lsvmd 00:02:36.887 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:36.887 CXX test/cpp_headers/conf.o 00:02:36.887 LINK hello_sock 00:02:37.144 LINK led 00:02:37.144 CXX test/cpp_headers/config.o 00:02:37.144 CXX test/cpp_headers/cpuset.o 00:02:37.144 LINK hello_fsdev 00:02:37.144 LINK memory_ut 00:02:37.144 LINK idxd_perf 00:02:37.144 CXX test/cpp_headers/crc16.o 00:02:37.144 CC examples/accel/perf/accel_perf.o 00:02:37.402 CC examples/blob/hello_world/hello_blob.o 00:02:37.402 CXX test/cpp_headers/crc32.o 00:02:37.402 CC examples/blob/cli/blobcli.o 00:02:37.402 CC examples/nvme/hello_world/hello_world.o 00:02:37.402 LINK iscsi_fuzz 00:02:37.402 CXX test/cpp_headers/crc64.o 00:02:37.402 CC test/accel/dif/dif.o 00:02:37.402 CC test/blobfs/mkfs/mkfs.o 00:02:37.402 LINK hello_blob 00:02:37.402 LINK spdk_nvme_identify 00:02:37.660 LINK spdk_nvme_perf 00:02:37.660 CXX test/cpp_headers/dif.o 00:02:37.660 LINK hello_world 00:02:37.660 LINK mkfs 00:02:37.660 CC test/app/stub/stub.o 00:02:37.660 LINK accel_perf 00:02:37.660 CC examples/nvme/reconnect/reconnect.o 00:02:37.660 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:37.660 CXX test/cpp_headers/dma.o 00:02:37.917 CXX test/cpp_headers/endian.o 00:02:37.917 CC app/spdk_nvme_discover/discovery_aer.o 00:02:37.917 LINK stub 00:02:37.917 LINK blobcli 00:02:37.917 CC test/event/event_perf/event_perf.o 00:02:37.917 CXX test/cpp_headers/env_dpdk.o 00:02:37.917 CC test/event/reactor/reactor.o 00:02:37.917 LINK spdk_nvme_discover 00:02:37.917 CC test/lvol/esnap/esnap.o 00:02:37.917 CC test/event/reactor_perf/reactor_perf.o 00:02:37.917 LINK event_perf 00:02:38.175 LINK 
reconnect 00:02:38.175 CC test/event/app_repeat/app_repeat.o 00:02:38.175 CXX test/cpp_headers/env.o 00:02:38.175 LINK reactor 00:02:38.175 LINK reactor_perf 00:02:38.175 LINK dif 00:02:38.175 CC app/spdk_top/spdk_top.o 00:02:38.175 LINK app_repeat 00:02:38.175 CXX test/cpp_headers/event.o 00:02:38.175 LINK nvme_manage 00:02:38.175 CC test/nvme/aer/aer.o 00:02:38.472 CC test/event/scheduler/scheduler.o 00:02:38.472 CXX test/cpp_headers/fd_group.o 00:02:38.472 CXX test/cpp_headers/fd.o 00:02:38.472 CC examples/bdev/hello_world/hello_bdev.o 00:02:38.472 CC examples/bdev/bdevperf/bdevperf.o 00:02:38.472 CC test/nvme/reset/reset.o 00:02:38.472 CC examples/nvme/arbitration/arbitration.o 00:02:38.472 CXX test/cpp_headers/file.o 00:02:38.472 LINK scheduler 00:02:38.472 LINK aer 00:02:38.731 LINK hello_bdev 00:02:38.731 CXX test/cpp_headers/fsdev.o 00:02:38.731 CC test/bdev/bdevio/bdevio.o 00:02:38.731 LINK reset 00:02:38.731 CXX test/cpp_headers/fsdev_module.o 00:02:38.731 CXX test/cpp_headers/ftl.o 00:02:38.731 CXX test/cpp_headers/fuse_dispatcher.o 00:02:38.731 LINK arbitration 00:02:38.731 CC examples/nvme/hotplug/hotplug.o 00:02:38.988 CC test/nvme/sgl/sgl.o 00:02:38.988 CXX test/cpp_headers/gpt_spec.o 00:02:38.988 CC test/nvme/e2edp/nvme_dp.o 00:02:38.988 CC test/nvme/overhead/overhead.o 00:02:38.988 CXX test/cpp_headers/hexlify.o 00:02:38.988 LINK bdevio 00:02:38.988 LINK hotplug 00:02:38.988 CC test/nvme/err_injection/err_injection.o 00:02:39.246 LINK bdevperf 00:02:39.246 CXX test/cpp_headers/histogram_data.o 00:02:39.246 LINK spdk_top 00:02:39.246 LINK nvme_dp 00:02:39.246 LINK sgl 00:02:39.246 LINK err_injection 00:02:39.246 CXX test/cpp_headers/idxd.o 00:02:39.246 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:39.246 LINK overhead 00:02:39.246 CC app/vhost/vhost.o 00:02:39.246 CC test/nvme/startup/startup.o 00:02:39.503 CC test/nvme/simple_copy/simple_copy.o 00:02:39.503 CXX test/cpp_headers/idxd_spec.o 00:02:39.503 CC test/nvme/reserve/reserve.o 00:02:39.503 CC 
test/nvme/connect_stress/connect_stress.o 00:02:39.503 CC test/nvme/boot_partition/boot_partition.o 00:02:39.503 LINK cmb_copy 00:02:39.503 LINK startup 00:02:39.503 CXX test/cpp_headers/init.o 00:02:39.503 LINK vhost 00:02:39.503 CC test/nvme/compliance/nvme_compliance.o 00:02:39.503 LINK boot_partition 00:02:39.503 LINK connect_stress 00:02:39.503 LINK reserve 00:02:39.503 LINK simple_copy 00:02:39.503 CXX test/cpp_headers/ioat.o 00:02:39.761 CC examples/nvme/abort/abort.o 00:02:39.761 CXX test/cpp_headers/ioat_spec.o 00:02:39.761 CC test/nvme/fused_ordering/fused_ordering.o 00:02:39.761 CXX test/cpp_headers/iscsi_spec.o 00:02:39.761 CXX test/cpp_headers/json.o 00:02:39.761 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:39.761 CC app/spdk_dd/spdk_dd.o 00:02:39.761 CC test/nvme/fdp/fdp.o 00:02:39.761 CXX test/cpp_headers/jsonrpc.o 00:02:39.761 LINK nvme_compliance 00:02:39.761 CXX test/cpp_headers/keyring.o 00:02:40.017 LINK fused_ordering 00:02:40.017 CC test/nvme/cuse/cuse.o 00:02:40.017 LINK doorbell_aers 00:02:40.017 CXX test/cpp_headers/keyring_module.o 00:02:40.018 CXX test/cpp_headers/likely.o 00:02:40.018 CXX test/cpp_headers/log.o 00:02:40.018 LINK abort 00:02:40.018 CXX test/cpp_headers/lvol.o 00:02:40.018 LINK spdk_dd 00:02:40.018 CXX test/cpp_headers/md5.o 00:02:40.018 CXX test/cpp_headers/memory.o 00:02:40.018 CXX test/cpp_headers/mmio.o 00:02:40.018 LINK fdp 00:02:40.274 CXX test/cpp_headers/nbd.o 00:02:40.274 CXX test/cpp_headers/net.o 00:02:40.274 CXX test/cpp_headers/notify.o 00:02:40.274 CXX test/cpp_headers/nvme.o 00:02:40.274 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:40.274 CXX test/cpp_headers/nvme_intel.o 00:02:40.274 CXX test/cpp_headers/nvme_ocssd.o 00:02:40.274 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:40.274 CC app/fio/nvme/fio_plugin.o 00:02:40.274 CXX test/cpp_headers/nvme_spec.o 00:02:40.531 LINK pmr_persistence 00:02:40.531 CXX test/cpp_headers/nvme_zns.o 00:02:40.531 CXX test/cpp_headers/nvmf_cmd.o 00:02:40.531 CC 
app/fio/bdev/fio_plugin.o 00:02:40.531 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:40.531 CXX test/cpp_headers/nvmf.o 00:02:40.531 CXX test/cpp_headers/nvmf_spec.o 00:02:40.531 CXX test/cpp_headers/nvmf_transport.o 00:02:40.531 CXX test/cpp_headers/opal.o 00:02:40.531 CXX test/cpp_headers/opal_spec.o 00:02:40.821 CXX test/cpp_headers/pci_ids.o 00:02:40.821 CXX test/cpp_headers/pipe.o 00:02:40.821 CC examples/nvmf/nvmf/nvmf.o 00:02:40.821 CXX test/cpp_headers/queue.o 00:02:40.821 CXX test/cpp_headers/reduce.o 00:02:40.821 CXX test/cpp_headers/rpc.o 00:02:40.821 CXX test/cpp_headers/scheduler.o 00:02:40.821 CXX test/cpp_headers/scsi.o 00:02:40.821 CXX test/cpp_headers/scsi_spec.o 00:02:40.821 LINK nvmf 00:02:40.821 CXX test/cpp_headers/sock.o 00:02:40.821 LINK spdk_nvme 00:02:40.821 CXX test/cpp_headers/stdinc.o 00:02:40.821 CXX test/cpp_headers/string.o 00:02:41.078 CXX test/cpp_headers/thread.o 00:02:41.078 LINK spdk_bdev 00:02:41.078 CXX test/cpp_headers/trace.o 00:02:41.078 CXX test/cpp_headers/trace_parser.o 00:02:41.078 CXX test/cpp_headers/tree.o 00:02:41.078 CXX test/cpp_headers/ublk.o 00:02:41.078 CXX test/cpp_headers/util.o 00:02:41.078 CXX test/cpp_headers/uuid.o 00:02:41.078 CXX test/cpp_headers/version.o 00:02:41.078 CXX test/cpp_headers/vfio_user_pci.o 00:02:41.078 CXX test/cpp_headers/vfio_user_spec.o 00:02:41.078 CXX test/cpp_headers/vhost.o 00:02:41.078 CXX test/cpp_headers/vmd.o 00:02:41.078 CXX test/cpp_headers/xor.o 00:02:41.079 CXX test/cpp_headers/zipf.o 00:02:41.372 LINK cuse 00:02:43.271 LINK esnap 00:02:43.842 00:02:43.842 real 1m10.739s 00:02:43.842 user 6m38.115s 00:02:43.842 sys 1m8.544s 00:02:43.842 14:27:35 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:43.842 14:27:35 make -- common/autotest_common.sh@10 -- $ set +x 00:02:43.842 ************************************ 00:02:43.842 END TEST make 00:02:43.842 ************************************ 00:02:43.842 14:27:35 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 
00:02:43.842 14:27:35 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:43.842 14:27:35 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:43.842 14:27:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:43.842 14:27:35 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:02:43.842 14:27:35 -- pm/common@44 -- $ pid=5018 00:02:43.842 14:27:35 -- pm/common@50 -- $ kill -TERM 5018 00:02:43.842 14:27:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:43.842 14:27:35 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:02:43.842 14:27:35 -- pm/common@44 -- $ pid=5019 00:02:43.842 14:27:35 -- pm/common@50 -- $ kill -TERM 5019 00:02:43.842 14:27:35 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:02:43.842 14:27:35 -- common/autotest_common.sh@1681 -- # lcov --version 00:02:43.842 14:27:35 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:02:43.842 14:27:35 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:02:43.842 14:27:35 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:43.842 14:27:35 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:43.842 14:27:35 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:43.842 14:27:35 -- scripts/common.sh@336 -- # IFS=.-: 00:02:43.842 14:27:35 -- scripts/common.sh@336 -- # read -ra ver1 00:02:43.842 14:27:35 -- scripts/common.sh@337 -- # IFS=.-: 00:02:43.842 14:27:35 -- scripts/common.sh@337 -- # read -ra ver2 00:02:43.842 14:27:35 -- scripts/common.sh@338 -- # local 'op=<' 00:02:43.842 14:27:35 -- scripts/common.sh@340 -- # ver1_l=2 00:02:43.842 14:27:35 -- scripts/common.sh@341 -- # ver2_l=1 00:02:43.842 14:27:35 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:43.842 14:27:35 -- scripts/common.sh@344 -- # case "$op" in 00:02:43.842 14:27:35 -- scripts/common.sh@345 -- # : 1 00:02:43.842 14:27:35 -- scripts/common.sh@364 -- # (( v = 0 )) 
00:02:43.842 14:27:35 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:43.842 14:27:35 -- scripts/common.sh@365 -- # decimal 1 00:02:43.842 14:27:35 -- scripts/common.sh@353 -- # local d=1 00:02:43.842 14:27:35 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:43.842 14:27:35 -- scripts/common.sh@355 -- # echo 1 00:02:43.842 14:27:35 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:43.842 14:27:35 -- scripts/common.sh@366 -- # decimal 2 00:02:43.842 14:27:35 -- scripts/common.sh@353 -- # local d=2 00:02:43.842 14:27:35 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:43.842 14:27:35 -- scripts/common.sh@355 -- # echo 2 00:02:43.842 14:27:35 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:43.842 14:27:35 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:43.842 14:27:35 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:43.842 14:27:35 -- scripts/common.sh@368 -- # return 0 00:02:43.842 14:27:35 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:43.842 14:27:35 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:02:43.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:43.842 --rc genhtml_branch_coverage=1 00:02:43.842 --rc genhtml_function_coverage=1 00:02:43.842 --rc genhtml_legend=1 00:02:43.842 --rc geninfo_all_blocks=1 00:02:43.842 --rc geninfo_unexecuted_blocks=1 00:02:43.842 00:02:43.842 ' 00:02:43.842 14:27:35 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:02:43.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:43.842 --rc genhtml_branch_coverage=1 00:02:43.842 --rc genhtml_function_coverage=1 00:02:43.842 --rc genhtml_legend=1 00:02:43.842 --rc geninfo_all_blocks=1 00:02:43.842 --rc geninfo_unexecuted_blocks=1 00:02:43.842 00:02:43.842 ' 00:02:43.842 14:27:35 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:02:43.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:02:43.842 --rc genhtml_branch_coverage=1 00:02:43.842 --rc genhtml_function_coverage=1 00:02:43.842 --rc genhtml_legend=1 00:02:43.842 --rc geninfo_all_blocks=1 00:02:43.842 --rc geninfo_unexecuted_blocks=1 00:02:43.842 00:02:43.842 ' 00:02:43.842 14:27:35 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:02:43.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:43.842 --rc genhtml_branch_coverage=1 00:02:43.842 --rc genhtml_function_coverage=1 00:02:43.842 --rc genhtml_legend=1 00:02:43.842 --rc geninfo_all_blocks=1 00:02:43.842 --rc geninfo_unexecuted_blocks=1 00:02:43.842 00:02:43.842 ' 00:02:43.842 14:27:35 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:02:43.842 14:27:35 -- nvmf/common.sh@7 -- # uname -s 00:02:43.842 14:27:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:43.842 14:27:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:43.842 14:27:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:43.842 14:27:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:43.842 14:27:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:43.842 14:27:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:43.843 14:27:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:43.843 14:27:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:43.843 14:27:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:43.843 14:27:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:43.843 14:27:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9bd3c322-ae73-4681-b7c6-8148d6e6f90c 00:02:43.843 14:27:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=9bd3c322-ae73-4681-b7c6-8148d6e6f90c 00:02:43.843 14:27:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:43.843 14:27:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:43.843 14:27:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:43.843 14:27:35 -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:43.843 14:27:35 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:43.843 14:27:35 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:43.843 14:27:35 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:43.843 14:27:35 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:43.843 14:27:35 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:43.843 14:27:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:43.843 14:27:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:43.843 14:27:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:43.843 14:27:35 -- paths/export.sh@5 -- # export PATH 00:02:43.843 14:27:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:43.843 14:27:35 -- nvmf/common.sh@51 -- # : 0 00:02:43.843 14:27:35 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:43.843 14:27:35 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:02:43.843 14:27:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:43.843 14:27:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:43.843 14:27:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:43.843 14:27:35 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:43.843 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:43.843 14:27:35 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:43.843 14:27:35 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:43.843 14:27:35 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:43.843 14:27:35 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:43.843 14:27:35 -- spdk/autotest.sh@32 -- # uname -s 00:02:43.843 14:27:35 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:43.843 14:27:35 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:43.843 14:27:35 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:43.843 14:27:35 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:02:43.843 14:27:35 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:43.843 14:27:35 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:43.843 14:27:35 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:43.843 14:27:35 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:43.843 14:27:35 -- spdk/autotest.sh@48 -- # udevadm_pid=53748 00:02:43.843 14:27:35 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:43.843 14:27:35 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:43.843 14:27:35 -- pm/common@17 -- # local monitor 00:02:43.843 14:27:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:44.101 14:27:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:44.101 14:27:35 -- pm/common@25 -- # sleep 1 00:02:44.101 14:27:35 
-- pm/common@21 -- # date +%s 00:02:44.101 14:27:35 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727792855 00:02:44.101 14:27:35 -- pm/common@21 -- # date +%s 00:02:44.101 14:27:35 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727792855 00:02:44.101 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727792855_collect-cpu-load.pm.log 00:02:44.101 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727792855_collect-vmstat.pm.log 00:02:45.036 14:27:36 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:45.036 14:27:36 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:45.036 14:27:36 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:45.036 14:27:36 -- common/autotest_common.sh@10 -- # set +x 00:02:45.036 14:27:36 -- spdk/autotest.sh@59 -- # create_test_list 00:02:45.036 14:27:36 -- common/autotest_common.sh@748 -- # xtrace_disable 00:02:45.036 14:27:36 -- common/autotest_common.sh@10 -- # set +x 00:02:45.036 14:27:36 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:02:45.037 14:27:36 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:02:45.037 14:27:36 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:02:45.037 14:27:36 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:02:45.037 14:27:36 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:02:45.037 14:27:36 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:45.037 14:27:36 -- common/autotest_common.sh@1455 -- # uname 00:02:45.037 14:27:36 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:45.037 14:27:36 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 
00:02:45.037 14:27:36 -- common/autotest_common.sh@1475 -- # uname 00:02:45.037 14:27:36 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:45.037 14:27:36 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:45.037 14:27:36 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:45.037 lcov: LCOV version 1.15 00:02:45.037 14:27:36 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:02:59.902 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:59.902 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:14.812 14:28:05 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:14.812 14:28:05 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:14.812 14:28:05 -- common/autotest_common.sh@10 -- # set +x 00:03:14.812 14:28:05 -- spdk/autotest.sh@78 -- # rm -f 00:03:14.812 14:28:05 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:14.812 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:14.812 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:14.812 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:14.812 14:28:06 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:14.812 14:28:06 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:14.812 14:28:06 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:14.812 14:28:06 -- 
common/autotest_common.sh@1656 -- # local nvme bdf 00:03:14.812 14:28:06 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:14.812 14:28:06 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:14.812 14:28:06 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:14.812 14:28:06 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:14.812 14:28:06 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:14.812 14:28:06 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:14.812 14:28:06 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:03:14.812 14:28:06 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:03:14.812 14:28:06 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:14.812 14:28:06 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:14.812 14:28:06 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:14.812 14:28:06 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:03:14.812 14:28:06 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:03:14.812 14:28:06 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:14.812 14:28:06 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:14.812 14:28:06 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:14.812 14:28:06 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:03:14.812 14:28:06 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:03:14.812 14:28:06 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:14.812 14:28:06 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:14.812 14:28:06 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:14.812 14:28:06 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:14.812 14:28:06 -- spdk/autotest.sh@99 -- # [[ -z '' 
]] 00:03:14.812 14:28:06 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:14.812 14:28:06 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:14.812 14:28:06 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:14.812 No valid GPT data, bailing 00:03:14.812 14:28:06 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:14.812 14:28:06 -- scripts/common.sh@394 -- # pt= 00:03:14.812 14:28:06 -- scripts/common.sh@395 -- # return 1 00:03:14.812 14:28:06 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:14.812 1+0 records in 00:03:14.812 1+0 records out 00:03:14.812 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00540521 s, 194 MB/s 00:03:14.812 14:28:06 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:14.812 14:28:06 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:14.812 14:28:06 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:14.812 14:28:06 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:14.812 14:28:06 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:14.812 No valid GPT data, bailing 00:03:14.812 14:28:06 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:14.812 14:28:06 -- scripts/common.sh@394 -- # pt= 00:03:14.812 14:28:06 -- scripts/common.sh@395 -- # return 1 00:03:14.812 14:28:06 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:14.812 1+0 records in 00:03:14.812 1+0 records out 00:03:14.812 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00542213 s, 193 MB/s 00:03:14.812 14:28:06 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:14.812 14:28:06 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:14.812 14:28:06 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:03:14.812 14:28:06 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:03:14.812 14:28:06 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:14.812 No valid GPT data, bailing 00:03:14.812 14:28:06 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:14.812 14:28:06 -- scripts/common.sh@394 -- # pt= 00:03:14.812 14:28:06 -- scripts/common.sh@395 -- # return 1 00:03:14.812 14:28:06 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:14.812 1+0 records in 00:03:14.812 1+0 records out 00:03:14.812 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00584421 s, 179 MB/s 00:03:14.812 14:28:06 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:14.812 14:28:06 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:14.812 14:28:06 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:03:14.812 14:28:06 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:03:14.812 14:28:06 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:14.812 No valid GPT data, bailing 00:03:14.812 14:28:06 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:14.812 14:28:06 -- scripts/common.sh@394 -- # pt= 00:03:14.812 14:28:06 -- scripts/common.sh@395 -- # return 1 00:03:14.812 14:28:06 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:14.812 1+0 records in 00:03:14.812 1+0 records out 00:03:14.812 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00561123 s, 187 MB/s 00:03:14.812 14:28:06 -- spdk/autotest.sh@105 -- # sync 00:03:14.812 14:28:06 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:14.812 14:28:06 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:14.812 14:28:06 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:16.728 14:28:08 -- spdk/autotest.sh@111 -- # uname -s 00:03:16.728 14:28:08 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:16.728 14:28:08 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:16.728 14:28:08 -- spdk/autotest.sh@115 
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:17.301 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:17.301 Hugepages 00:03:17.301 node hugesize free / total 00:03:17.301 node0 1048576kB 0 / 0 00:03:17.301 node0 2048kB 0 / 0 00:03:17.301 00:03:17.301 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:17.301 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:17.301 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:17.301 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:17.301 14:28:08 -- spdk/autotest.sh@117 -- # uname -s 00:03:17.301 14:28:08 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:17.301 14:28:08 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:17.301 14:28:08 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:17.969 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:17.969 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:18.231 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:18.231 14:28:09 -- common/autotest_common.sh@1515 -- # sleep 1 00:03:19.174 14:28:10 -- common/autotest_common.sh@1516 -- # bdfs=() 00:03:19.174 14:28:10 -- common/autotest_common.sh@1516 -- # local bdfs 00:03:19.174 14:28:10 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:03:19.174 14:28:10 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:03:19.174 14:28:10 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:19.174 14:28:10 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:19.174 14:28:10 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:19.174 14:28:10 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:19.174 14:28:10 -- common/autotest_common.sh@1497 -- # jq -r 
'.config[].params.traddr' 00:03:19.175 14:28:10 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:03:19.175 14:28:10 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:03:19.175 14:28:10 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:19.436 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:19.436 Waiting for block devices as requested 00:03:19.436 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:03:19.697 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:03:19.697 14:28:11 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:19.697 14:28:11 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:03:19.697 14:28:11 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:19.697 14:28:11 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:03:19.697 14:28:11 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:19.697 14:28:11 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:03:19.697 14:28:11 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:19.697 14:28:11 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:03:19.697 14:28:11 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:03:19.697 14:28:11 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:03:19.697 14:28:11 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:19.697 14:28:11 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:03:19.697 14:28:11 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:19.697 14:28:11 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:03:19.697 14:28:11 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 
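The trace above pulls the `oacs` field out of `nvme id-ctrl` output and derives `oacs_ns_manage=8` from the reported value `0x12a`. A minimal sketch of that bit test, assuming the `0x12a` value seen in this run (condensed, not a verbatim reproduction of the autotest_common.sh logic):

```shell
# Sketch of the OACS namespace-management check traced above.
# Assumption: 0x12a is the oacs value this controller reported via `nvme id-ctrl`.
oacs=0x12a
# Bit 3 (0x8) of OACS advertises Namespace Management/Attachment support,
# which is why the trace proceeds with [[ 8 -ne 0 ]].
oacs_ns_manage=$(( oacs & 0x8 ))
echo "$oacs_ns_manage"
```

With bit 3 set, the script treats the controller as supporting namespace management and continues to the `unvmcap` check.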
00:03:19.697 14:28:11 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:19.697 14:28:11 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:03:19.697 14:28:11 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:19.697 14:28:11 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:19.697 14:28:11 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:19.697 14:28:11 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:19.697 14:28:11 -- common/autotest_common.sh@1541 -- # continue 00:03:19.697 14:28:11 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:19.697 14:28:11 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:03:19.697 14:28:11 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:19.697 14:28:11 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:03:19.697 14:28:11 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:19.697 14:28:11 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:03:19.697 14:28:11 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:19.697 14:28:11 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:03:19.697 14:28:11 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:03:19.697 14:28:11 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:03:19.697 14:28:11 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:19.697 14:28:11 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:19.697 14:28:11 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:03:19.697 14:28:11 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:03:19.697 14:28:11 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:19.697 14:28:11 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:19.697 14:28:11 
-- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:03:19.697 14:28:11 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:19.697 14:28:11 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:19.697 14:28:11 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:19.697 14:28:11 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:19.697 14:28:11 -- common/autotest_common.sh@1541 -- # continue 00:03:19.697 14:28:11 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:19.697 14:28:11 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:19.697 14:28:11 -- common/autotest_common.sh@10 -- # set +x 00:03:19.697 14:28:11 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:19.697 14:28:11 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:19.697 14:28:11 -- common/autotest_common.sh@10 -- # set +x 00:03:19.697 14:28:11 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:20.270 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:20.532 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:20.532 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:20.532 14:28:12 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:20.532 14:28:12 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:20.532 14:28:12 -- common/autotest_common.sh@10 -- # set +x 00:03:20.532 14:28:12 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:20.532 14:28:12 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:03:20.532 14:28:12 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:03:20.532 14:28:12 -- common/autotest_common.sh@1561 -- # bdfs=() 00:03:20.532 14:28:12 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:03:20.532 14:28:12 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:03:20.532 14:28:12 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:03:20.532 14:28:12 -- 
common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:03:20.532 14:28:12 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:20.532 14:28:12 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:20.532 14:28:12 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:20.532 14:28:12 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:20.532 14:28:12 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:20.794 14:28:12 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:03:20.794 14:28:12 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:03:20.794 14:28:12 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:20.794 14:28:12 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:03:20.794 14:28:12 -- common/autotest_common.sh@1564 -- # device=0x0010 00:03:20.794 14:28:12 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:20.794 14:28:12 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:20.794 14:28:12 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:03:20.794 14:28:12 -- common/autotest_common.sh@1564 -- # device=0x0010 00:03:20.794 14:28:12 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:20.794 14:28:12 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:03:20.794 14:28:12 -- common/autotest_common.sh@1570 -- # return 0 00:03:20.794 14:28:12 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:03:20.794 14:28:12 -- common/autotest_common.sh@1578 -- # return 0 00:03:20.794 14:28:12 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:20.794 14:28:12 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:20.794 14:28:12 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:20.794 14:28:12 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:20.794 14:28:12 -- 
spdk/autotest.sh@149 -- # timing_enter lib 00:03:20.794 14:28:12 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:20.794 14:28:12 -- common/autotest_common.sh@10 -- # set +x 00:03:20.794 14:28:12 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:20.794 14:28:12 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:20.794 14:28:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:20.794 14:28:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:20.794 14:28:12 -- common/autotest_common.sh@10 -- # set +x 00:03:20.794 ************************************ 00:03:20.794 START TEST env 00:03:20.794 ************************************ 00:03:20.794 14:28:12 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:20.794 * Looking for test storage... 00:03:20.794 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:03:20.794 14:28:12 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:20.794 14:28:12 env -- common/autotest_common.sh@1681 -- # lcov --version 00:03:20.794 14:28:12 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:20.794 14:28:12 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:20.794 14:28:12 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:20.794 14:28:12 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:20.794 14:28:12 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:20.794 14:28:12 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:20.794 14:28:12 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:20.794 14:28:12 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:20.794 14:28:12 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:20.794 14:28:12 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:20.794 14:28:12 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:20.794 14:28:12 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:20.794 14:28:12 env -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:03:20.794 14:28:12 env -- scripts/common.sh@344 -- # case "$op" in 00:03:20.794 14:28:12 env -- scripts/common.sh@345 -- # : 1 00:03:20.794 14:28:12 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:20.794 14:28:12 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:20.794 14:28:12 env -- scripts/common.sh@365 -- # decimal 1 00:03:20.794 14:28:12 env -- scripts/common.sh@353 -- # local d=1 00:03:20.794 14:28:12 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:20.794 14:28:12 env -- scripts/common.sh@355 -- # echo 1 00:03:20.794 14:28:12 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:20.794 14:28:12 env -- scripts/common.sh@366 -- # decimal 2 00:03:20.794 14:28:12 env -- scripts/common.sh@353 -- # local d=2 00:03:20.794 14:28:12 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:20.794 14:28:12 env -- scripts/common.sh@355 -- # echo 2 00:03:20.794 14:28:12 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:20.794 14:28:12 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:20.794 14:28:12 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:20.794 14:28:12 env -- scripts/common.sh@368 -- # return 0 00:03:20.794 14:28:12 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:20.794 14:28:12 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:20.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:20.794 --rc genhtml_branch_coverage=1 00:03:20.794 --rc genhtml_function_coverage=1 00:03:20.794 --rc genhtml_legend=1 00:03:20.794 --rc geninfo_all_blocks=1 00:03:20.794 --rc geninfo_unexecuted_blocks=1 00:03:20.794 00:03:20.794 ' 00:03:20.794 14:28:12 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:20.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:20.794 --rc genhtml_branch_coverage=1 00:03:20.794 --rc genhtml_function_coverage=1 
00:03:20.794 --rc genhtml_legend=1 00:03:20.794 --rc geninfo_all_blocks=1 00:03:20.794 --rc geninfo_unexecuted_blocks=1 00:03:20.794 00:03:20.794 ' 00:03:20.794 14:28:12 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:20.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:20.794 --rc genhtml_branch_coverage=1 00:03:20.794 --rc genhtml_function_coverage=1 00:03:20.794 --rc genhtml_legend=1 00:03:20.794 --rc geninfo_all_blocks=1 00:03:20.794 --rc geninfo_unexecuted_blocks=1 00:03:20.794 00:03:20.794 ' 00:03:20.794 14:28:12 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:20.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:20.794 --rc genhtml_branch_coverage=1 00:03:20.794 --rc genhtml_function_coverage=1 00:03:20.794 --rc genhtml_legend=1 00:03:20.794 --rc geninfo_all_blocks=1 00:03:20.794 --rc geninfo_unexecuted_blocks=1 00:03:20.794 00:03:20.794 ' 00:03:20.794 14:28:12 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:20.794 14:28:12 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:20.794 14:28:12 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:20.794 14:28:12 env -- common/autotest_common.sh@10 -- # set +x 00:03:20.794 ************************************ 00:03:20.794 START TEST env_memory 00:03:20.794 ************************************ 00:03:20.794 14:28:12 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:20.794 00:03:20.794 00:03:20.794 CUnit - A unit testing framework for C - Version 2.1-3 00:03:20.794 http://cunit.sourceforge.net/ 00:03:20.794 00:03:20.794 00:03:20.794 Suite: memory 00:03:21.054 Test: alloc and free memory map ...[2024-10-01 14:28:12.490006] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:21.054 passed 00:03:21.054 Test: mem map translation 
...[2024-10-01 14:28:12.529992] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:21.054 [2024-10-01 14:28:12.530048] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:21.054 [2024-10-01 14:28:12.530126] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:21.054 [2024-10-01 14:28:12.530148] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:21.054 passed 00:03:21.054 Test: mem map registration ...[2024-10-01 14:28:12.602988] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:21.054 [2024-10-01 14:28:12.603047] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:21.054 passed 00:03:21.054 Test: mem map adjacent registrations ...passed 00:03:21.054 00:03:21.054 Run Summary: Type Total Ran Passed Failed Inactive 00:03:21.054 suites 1 1 n/a 0 0 00:03:21.054 tests 4 4 4 0 0 00:03:21.054 asserts 152 152 152 0 n/a 00:03:21.054 00:03:21.054 Elapsed time = 0.240 seconds 00:03:21.054 00:03:21.054 real 0m0.275s 00:03:21.054 user 0m0.241s 00:03:21.054 sys 0m0.024s 00:03:21.054 14:28:12 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:21.054 ************************************ 00:03:21.054 END TEST env_memory 00:03:21.054 ************************************ 00:03:21.054 14:28:12 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:21.315 14:28:12 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:21.315 14:28:12 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:21.315 14:28:12 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:21.315 14:28:12 env -- common/autotest_common.sh@10 -- # set +x 00:03:21.315 ************************************ 00:03:21.315 START TEST env_vtophys 00:03:21.315 ************************************ 00:03:21.315 14:28:12 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:21.315 EAL: lib.eal log level changed from notice to debug 00:03:21.315 EAL: Detected lcore 0 as core 0 on socket 0 00:03:21.315 EAL: Detected lcore 1 as core 0 on socket 0 00:03:21.315 EAL: Detected lcore 2 as core 0 on socket 0 00:03:21.315 EAL: Detected lcore 3 as core 0 on socket 0 00:03:21.315 EAL: Detected lcore 4 as core 0 on socket 0 00:03:21.315 EAL: Detected lcore 5 as core 0 on socket 0 00:03:21.315 EAL: Detected lcore 6 as core 0 on socket 0 00:03:21.315 EAL: Detected lcore 7 as core 0 on socket 0 00:03:21.315 EAL: Detected lcore 8 as core 0 on socket 0 00:03:21.315 EAL: Detected lcore 9 as core 0 on socket 0 00:03:21.315 EAL: Maximum logical cores by configuration: 128 00:03:21.315 EAL: Detected CPU lcores: 10 00:03:21.315 EAL: Detected NUMA nodes: 1 00:03:21.315 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:21.315 EAL: Detected shared linkage of DPDK 00:03:21.315 EAL: No shared files mode enabled, IPC will be disabled 00:03:21.315 EAL: Selected IOVA mode 'PA' 00:03:21.315 EAL: Probing VFIO support... 00:03:21.316 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:21.316 EAL: VFIO modules not loaded, skipping VFIO support... 00:03:21.316 EAL: Ask a virtual area of 0x2e000 bytes 00:03:21.316 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:21.316 EAL: Setting up physically contiguous memory... 
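EAL reports above that `/sys/module/vfio` is missing and skips VFIO support, which is why the earlier `setup.sh` runs bound the NVMe devices to `uio_pci_generic` instead. A hypothetical helper mirroring that probe (the function name and messages are illustrative, not part of SPDK or DPDK):

```shell
# Hypothetical sketch of EAL's VFIO availability probe; names are illustrative.
vfio_available() {
    local sysfs_root=${1:-/sys}
    # Both the core vfio module and vfio_pci must be loaded for VFIO binding.
    [[ -e $sysfs_root/module/vfio && -e $sysfs_root/module/vfio_pci ]]
}

if vfio_available /nonexistent; then
    echo "VFIO loaded"
else
    echo "VFIO modules not loaded, skipping VFIO support"
fi
```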
00:03:21.316 EAL: Setting maximum number of open files to 524288 00:03:21.316 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:21.316 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:21.316 EAL: Ask a virtual area of 0x61000 bytes 00:03:21.316 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:21.316 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:21.316 EAL: Ask a virtual area of 0x400000000 bytes 00:03:21.316 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:21.316 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:21.316 EAL: Ask a virtual area of 0x61000 bytes 00:03:21.316 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:21.316 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:21.316 EAL: Ask a virtual area of 0x400000000 bytes 00:03:21.316 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:21.316 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:21.316 EAL: Ask a virtual area of 0x61000 bytes 00:03:21.316 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:21.316 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:21.316 EAL: Ask a virtual area of 0x400000000 bytes 00:03:21.316 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:21.316 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:21.316 EAL: Ask a virtual area of 0x61000 bytes 00:03:21.316 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:21.316 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:21.316 EAL: Ask a virtual area of 0x400000000 bytes 00:03:21.316 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:21.316 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:21.316 EAL: Hugepages will be freed exactly as allocated. 
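The four `size = 0x400000000` VA reservations above follow directly from the memseg list parameters EAL prints (`n_segs:8192`, `hugepage_sz:2097152`); this is just that arithmetic, as a sanity check:

```shell
# Each memseg list reserves n_segs * hugepage_sz bytes of virtual address space:
# 8192 segments of 2 MiB hugepages = 16 GiB = 0x400000000, matching the log.
n_segs=8192
hugepage_sz=2097152   # 2 MiB
printf '0x%x\n' $(( n_segs * hugepage_sz ))
```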
00:03:21.316 EAL: No shared files mode enabled, IPC is disabled 00:03:21.316 EAL: No shared files mode enabled, IPC is disabled 00:03:21.316 EAL: TSC frequency is ~2600000 KHz 00:03:21.316 EAL: Main lcore 0 is ready (tid=7fd178c0fa40;cpuset=[0]) 00:03:21.316 EAL: Trying to obtain current memory policy. 00:03:21.316 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.316 EAL: Restoring previous memory policy: 0 00:03:21.316 EAL: request: mp_malloc_sync 00:03:21.316 EAL: No shared files mode enabled, IPC is disabled 00:03:21.316 EAL: Heap on socket 0 was expanded by 2MB 00:03:21.316 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:21.316 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:21.316 EAL: Mem event callback 'spdk:(nil)' registered 00:03:21.316 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:03:21.316 00:03:21.316 00:03:21.316 CUnit - A unit testing framework for C - Version 2.1-3 00:03:21.316 http://cunit.sourceforge.net/ 00:03:21.316 00:03:21.316 00:03:21.316 Suite: components_suite 00:03:21.577 Test: vtophys_malloc_test ...passed 00:03:21.838 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:21.838 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.838 EAL: Restoring previous memory policy: 4 00:03:21.838 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.838 EAL: request: mp_malloc_sync 00:03:21.838 EAL: No shared files mode enabled, IPC is disabled 00:03:21.838 EAL: Heap on socket 0 was expanded by 4MB 00:03:21.838 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.838 EAL: request: mp_malloc_sync 00:03:21.838 EAL: No shared files mode enabled, IPC is disabled 00:03:21.838 EAL: Heap on socket 0 was shrunk by 4MB 00:03:21.838 EAL: Trying to obtain current memory policy. 
00:03:21.838 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.838 EAL: Restoring previous memory policy: 4 00:03:21.838 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.838 EAL: request: mp_malloc_sync 00:03:21.838 EAL: No shared files mode enabled, IPC is disabled 00:03:21.838 EAL: Heap on socket 0 was expanded by 6MB 00:03:21.838 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.838 EAL: request: mp_malloc_sync 00:03:21.838 EAL: No shared files mode enabled, IPC is disabled 00:03:21.838 EAL: Heap on socket 0 was shrunk by 6MB 00:03:21.838 EAL: Trying to obtain current memory policy. 00:03:21.838 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.838 EAL: Restoring previous memory policy: 4 00:03:21.838 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.838 EAL: request: mp_malloc_sync 00:03:21.838 EAL: No shared files mode enabled, IPC is disabled 00:03:21.838 EAL: Heap on socket 0 was expanded by 10MB 00:03:21.838 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.838 EAL: request: mp_malloc_sync 00:03:21.838 EAL: No shared files mode enabled, IPC is disabled 00:03:21.838 EAL: Heap on socket 0 was shrunk by 10MB 00:03:21.838 EAL: Trying to obtain current memory policy. 00:03:21.838 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.839 EAL: Restoring previous memory policy: 4 00:03:21.839 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.839 EAL: request: mp_malloc_sync 00:03:21.839 EAL: No shared files mode enabled, IPC is disabled 00:03:21.839 EAL: Heap on socket 0 was expanded by 18MB 00:03:21.839 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.839 EAL: request: mp_malloc_sync 00:03:21.839 EAL: No shared files mode enabled, IPC is disabled 00:03:21.839 EAL: Heap on socket 0 was shrunk by 18MB 00:03:21.839 EAL: Trying to obtain current memory policy. 
00:03:21.839 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.839 EAL: Restoring previous memory policy: 4 00:03:21.839 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.839 EAL: request: mp_malloc_sync 00:03:21.839 EAL: No shared files mode enabled, IPC is disabled 00:03:21.839 EAL: Heap on socket 0 was expanded by 34MB 00:03:21.839 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.839 EAL: request: mp_malloc_sync 00:03:21.839 EAL: No shared files mode enabled, IPC is disabled 00:03:21.839 EAL: Heap on socket 0 was shrunk by 34MB 00:03:21.839 EAL: Trying to obtain current memory policy. 00:03:21.839 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.839 EAL: Restoring previous memory policy: 4 00:03:21.839 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.839 EAL: request: mp_malloc_sync 00:03:21.839 EAL: No shared files mode enabled, IPC is disabled 00:03:21.839 EAL: Heap on socket 0 was expanded by 66MB 00:03:21.839 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.839 EAL: request: mp_malloc_sync 00:03:21.839 EAL: No shared files mode enabled, IPC is disabled 00:03:21.839 EAL: Heap on socket 0 was shrunk by 66MB 00:03:22.100 EAL: Trying to obtain current memory policy. 00:03:22.100 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:22.100 EAL: Restoring previous memory policy: 4 00:03:22.100 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.100 EAL: request: mp_malloc_sync 00:03:22.100 EAL: No shared files mode enabled, IPC is disabled 00:03:22.100 EAL: Heap on socket 0 was expanded by 130MB 00:03:22.100 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.100 EAL: request: mp_malloc_sync 00:03:22.100 EAL: No shared files mode enabled, IPC is disabled 00:03:22.100 EAL: Heap on socket 0 was shrunk by 130MB 00:03:22.360 EAL: Trying to obtain current memory policy. 
00:03:22.360 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:22.360 EAL: Restoring previous memory policy: 4 00:03:22.360 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.360 EAL: request: mp_malloc_sync 00:03:22.360 EAL: No shared files mode enabled, IPC is disabled 00:03:22.360 EAL: Heap on socket 0 was expanded by 258MB 00:03:22.620 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.620 EAL: request: mp_malloc_sync 00:03:22.620 EAL: No shared files mode enabled, IPC is disabled 00:03:22.620 EAL: Heap on socket 0 was shrunk by 258MB 00:03:22.882 EAL: Trying to obtain current memory policy. 00:03:22.882 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:22.882 EAL: Restoring previous memory policy: 4 00:03:22.882 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.882 EAL: request: mp_malloc_sync 00:03:22.882 EAL: No shared files mode enabled, IPC is disabled 00:03:22.882 EAL: Heap on socket 0 was expanded by 514MB 00:03:23.456 EAL: Calling mem event callback 'spdk:(nil)' 00:03:23.456 EAL: request: mp_malloc_sync 00:03:23.456 EAL: No shared files mode enabled, IPC is disabled 00:03:23.456 EAL: Heap on socket 0 was shrunk by 514MB 00:03:24.027 EAL: Trying to obtain current memory policy. 
00:03:24.027 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:24.287 EAL: Restoring previous memory policy: 4 00:03:24.287 EAL: Calling mem event callback 'spdk:(nil)' 00:03:24.287 EAL: request: mp_malloc_sync 00:03:24.287 EAL: No shared files mode enabled, IPC is disabled 00:03:24.287 EAL: Heap on socket 0 was expanded by 1026MB 00:03:25.226 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.487 EAL: request: mp_malloc_sync 00:03:25.487 EAL: No shared files mode enabled, IPC is disabled 00:03:25.487 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:26.426 passed 00:03:26.426 00:03:26.426 Run Summary: Type Total Ran Passed Failed Inactive 00:03:26.426 suites 1 1 n/a 0 0 00:03:26.426 tests 2 2 2 0 0 00:03:26.426 asserts 5852 5852 5852 0 n/a 00:03:26.426 00:03:26.426 Elapsed time = 5.040 seconds 00:03:26.426 EAL: Calling mem event callback 'spdk:(nil)' 00:03:26.426 EAL: request: mp_malloc_sync 00:03:26.426 EAL: No shared files mode enabled, IPC is disabled 00:03:26.426 EAL: Heap on socket 0 was shrunk by 2MB 00:03:26.426 EAL: No shared files mode enabled, IPC is disabled 00:03:26.426 EAL: No shared files mode enabled, IPC is disabled 00:03:26.426 EAL: No shared files mode enabled, IPC is disabled 00:03:26.426 00:03:26.426 real 0m5.297s 00:03:26.426 user 0m4.505s 00:03:26.426 sys 0m0.644s 00:03:26.426 14:28:18 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:26.426 ************************************ 00:03:26.426 END TEST env_vtophys 00:03:26.426 ************************************ 00:03:26.426 14:28:18 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:26.687 14:28:18 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:26.687 14:28:18 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:26.687 14:28:18 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:26.687 14:28:18 env -- common/autotest_common.sh@10 -- # set +x 00:03:26.687 
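The expand/shrink sizes in the `vtophys_spdk_malloc_test` trace above (4, 6, 10, 18, ... 1026 MB) follow a 2^n + 2 MB progression, an observation about the logged values rather than anything stated in the test output itself:

```shell
# Reproduce the heap expansion sizes seen in the malloc test trace:
# each step allocates 2^n + 2 MB for n = 1..10.
sizes=()
for (( n = 1; n <= 10; n++ )); do
    sizes+=( $(( 2**n + 2 )) )
done
echo "${sizes[*]}"
```

The final "shrunk by 2MB" entry is the initial 2 MB heap expansion from EAL startup being released at teardown, not part of this progression.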
************************************ 00:03:26.687 START TEST env_pci 00:03:26.687 ************************************ 00:03:26.687 14:28:18 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:26.687 00:03:26.687 00:03:26.687 CUnit - A unit testing framework for C - Version 2.1-3 00:03:26.687 http://cunit.sourceforge.net/ 00:03:26.687 00:03:26.687 00:03:26.687 Suite: pci 00:03:26.687 Test: pci_hook ...[2024-10-01 14:28:18.173872] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 55985 has claimed it 00:03:26.687 passed 00:03:26.687 00:03:26.687 Run Summary: Type Total Ran Passed Failed Inactive 00:03:26.687 suites 1 1 n/a 0 0 00:03:26.687 tests 1 1 1 0 0 00:03:26.687 asserts 25 25 25 0 n/a 00:03:26.687 00:03:26.687 Elapsed time = 0.006 seconds 00:03:26.687 EAL: Cannot find device (10000:00:01.0) 00:03:26.687 EAL: Failed to attach device on primary process 00:03:26.687 00:03:26.687 real 0m0.065s 00:03:26.687 user 0m0.028s 00:03:26.687 sys 0m0.036s 00:03:26.687 14:28:18 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:26.687 14:28:18 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:26.687 ************************************ 00:03:26.687 END TEST env_pci 00:03:26.687 ************************************ 00:03:26.687 14:28:18 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:26.687 14:28:18 env -- env/env.sh@15 -- # uname 00:03:26.687 14:28:18 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:26.687 14:28:18 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:26.688 14:28:18 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:26.688 14:28:18 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:03:26.688 14:28:18 env 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:26.688 14:28:18 env -- common/autotest_common.sh@10 -- # set +x 00:03:26.688 ************************************ 00:03:26.688 START TEST env_dpdk_post_init 00:03:26.688 ************************************ 00:03:26.688 14:28:18 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:26.688 EAL: Detected CPU lcores: 10 00:03:26.688 EAL: Detected NUMA nodes: 1 00:03:26.688 EAL: Detected shared linkage of DPDK 00:03:26.688 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:26.688 EAL: Selected IOVA mode 'PA' 00:03:26.948 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:26.948 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:03:26.948 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:03:26.948 Starting DPDK initialization... 00:03:26.948 Starting SPDK post initialization... 00:03:26.948 SPDK NVMe probe 00:03:26.948 Attaching to 0000:00:10.0 00:03:26.948 Attaching to 0000:00:11.0 00:03:26.948 Attached to 0000:00:10.0 00:03:26.948 Attached to 0000:00:11.0 00:03:26.948 Cleaning up... 
00:03:26.948 00:03:26.948 real 0m0.229s 00:03:26.948 user 0m0.063s 00:03:26.948 sys 0m0.064s 00:03:26.948 ************************************ 00:03:26.948 END TEST env_dpdk_post_init 00:03:26.948 ************************************ 00:03:26.948 14:28:18 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:26.948 14:28:18 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:26.948 14:28:18 env -- env/env.sh@26 -- # uname 00:03:26.948 14:28:18 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:26.948 14:28:18 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:26.948 14:28:18 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:26.948 14:28:18 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:26.948 14:28:18 env -- common/autotest_common.sh@10 -- # set +x 00:03:26.948 ************************************ 00:03:26.948 START TEST env_mem_callbacks 00:03:26.948 ************************************ 00:03:26.948 14:28:18 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:26.948 EAL: Detected CPU lcores: 10 00:03:26.948 EAL: Detected NUMA nodes: 1 00:03:26.948 EAL: Detected shared linkage of DPDK 00:03:26.948 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:26.948 EAL: Selected IOVA mode 'PA' 00:03:27.210 00:03:27.210 00:03:27.210 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.210 http://cunit.sourceforge.net/ 00:03:27.210 00:03:27.210 00:03:27.210 Suite: memory 00:03:27.210 Test: test ... 
00:03:27.210 register 0x200000200000 2097152 00:03:27.210 malloc 3145728 00:03:27.210 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:27.210 register 0x200000400000 4194304 00:03:27.210 buf 0x2000004fffc0 len 3145728 PASSED 00:03:27.210 malloc 64 00:03:27.210 buf 0x2000004ffec0 len 64 PASSED 00:03:27.210 malloc 4194304 00:03:27.210 register 0x200000800000 6291456 00:03:27.210 buf 0x2000009fffc0 len 4194304 PASSED 00:03:27.210 free 0x2000004fffc0 3145728 00:03:27.210 free 0x2000004ffec0 64 00:03:27.210 unregister 0x200000400000 4194304 PASSED 00:03:27.210 free 0x2000009fffc0 4194304 00:03:27.210 unregister 0x200000800000 6291456 PASSED 00:03:27.210 malloc 8388608 00:03:27.210 register 0x200000400000 10485760 00:03:27.210 buf 0x2000005fffc0 len 8388608 PASSED 00:03:27.210 free 0x2000005fffc0 8388608 00:03:27.210 unregister 0x200000400000 10485760 PASSED 00:03:27.210 passed 00:03:27.210 00:03:27.210 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.210 suites 1 1 n/a 0 0 00:03:27.210 tests 1 1 1 0 0 00:03:27.210 asserts 15 15 15 0 n/a 00:03:27.210 00:03:27.210 Elapsed time = 0.046 seconds 00:03:27.210 00:03:27.210 real 0m0.214s 00:03:27.210 user 0m0.069s 00:03:27.210 sys 0m0.042s 00:03:27.210 14:28:18 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:27.210 ************************************ 00:03:27.210 END TEST env_mem_callbacks 00:03:27.210 ************************************ 00:03:27.210 14:28:18 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:27.210 00:03:27.210 real 0m6.549s 00:03:27.210 user 0m5.054s 00:03:27.210 sys 0m1.028s 00:03:27.210 ************************************ 00:03:27.210 END TEST env 00:03:27.210 ************************************ 00:03:27.210 14:28:18 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:27.210 14:28:18 env -- common/autotest_common.sh@10 -- # set +x 00:03:27.210 14:28:18 -- spdk/autotest.sh@156 -- # run_test rpc 
/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:27.210 14:28:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:27.210 14:28:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:27.210 14:28:18 -- common/autotest_common.sh@10 -- # set +x 00:03:27.210 ************************************ 00:03:27.210 START TEST rpc 00:03:27.210 ************************************ 00:03:27.210 14:28:18 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:27.471 * Looking for test storage... 00:03:27.471 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:03:27.471 14:28:18 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:27.471 14:28:18 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:27.471 14:28:18 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:03:27.471 14:28:19 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:27.471 14:28:19 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:27.471 14:28:19 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:27.471 14:28:19 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:27.471 14:28:19 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:27.471 14:28:19 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:27.471 14:28:19 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:27.471 14:28:19 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:27.471 14:28:19 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:27.471 14:28:19 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:27.471 14:28:19 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:27.471 14:28:19 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:27.471 14:28:19 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:27.471 14:28:19 rpc -- scripts/common.sh@345 -- # : 1 00:03:27.471 14:28:19 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:27.471 14:28:19 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:27.471 14:28:19 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:27.471 14:28:19 rpc -- scripts/common.sh@353 -- # local d=1 00:03:27.471 14:28:19 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:27.471 14:28:19 rpc -- scripts/common.sh@355 -- # echo 1 00:03:27.471 14:28:19 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:27.471 14:28:19 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:27.471 14:28:19 rpc -- scripts/common.sh@353 -- # local d=2 00:03:27.471 14:28:19 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:27.471 14:28:19 rpc -- scripts/common.sh@355 -- # echo 2 00:03:27.471 14:28:19 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:27.471 14:28:19 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:27.471 14:28:19 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:27.471 14:28:19 rpc -- scripts/common.sh@368 -- # return 0 00:03:27.471 14:28:19 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:27.471 14:28:19 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:27.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:27.471 --rc genhtml_branch_coverage=1 00:03:27.471 --rc genhtml_function_coverage=1 00:03:27.471 --rc genhtml_legend=1 00:03:27.471 --rc geninfo_all_blocks=1 00:03:27.471 --rc geninfo_unexecuted_blocks=1 00:03:27.471 00:03:27.471 ' 00:03:27.471 14:28:19 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:27.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:27.471 --rc genhtml_branch_coverage=1 00:03:27.471 --rc genhtml_function_coverage=1 00:03:27.471 --rc genhtml_legend=1 00:03:27.471 --rc geninfo_all_blocks=1 00:03:27.471 --rc geninfo_unexecuted_blocks=1 00:03:27.471 00:03:27.471 ' 00:03:27.471 14:28:19 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:27.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:27.471 --rc genhtml_branch_coverage=1 00:03:27.471 --rc genhtml_function_coverage=1 00:03:27.471 --rc genhtml_legend=1 00:03:27.471 --rc geninfo_all_blocks=1 00:03:27.471 --rc geninfo_unexecuted_blocks=1 00:03:27.471 00:03:27.471 ' 00:03:27.471 14:28:19 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:27.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:27.471 --rc genhtml_branch_coverage=1 00:03:27.471 --rc genhtml_function_coverage=1 00:03:27.471 --rc genhtml_legend=1 00:03:27.471 --rc geninfo_all_blocks=1 00:03:27.471 --rc geninfo_unexecuted_blocks=1 00:03:27.471 00:03:27.471 ' 00:03:27.471 14:28:19 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56112 00:03:27.471 14:28:19 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:27.471 14:28:19 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56112 00:03:27.471 14:28:19 rpc -- common/autotest_common.sh@831 -- # '[' -z 56112 ']' 00:03:27.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:27.471 14:28:19 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:27.471 14:28:19 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:27.471 14:28:19 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:27.471 14:28:19 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:27.471 14:28:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:27.471 14:28:19 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:03:27.471 [2024-10-01 14:28:19.098389] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:03:27.471 [2024-10-01 14:28:19.098527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56112 ] 00:03:27.731 [2024-10-01 14:28:19.250453] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:27.992 [2024-10-01 14:28:19.435473] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:27.992 [2024-10-01 14:28:19.435523] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56112' to capture a snapshot of events at runtime. 00:03:27.992 [2024-10-01 14:28:19.435534] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:27.992 [2024-10-01 14:28:19.435545] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:27.992 [2024-10-01 14:28:19.435553] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56112 for offline analysis/debug. 
00:03:27.992 [2024-10-01 14:28:19.435587] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:03:28.565 14:28:20 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:28.565 14:28:20 rpc -- common/autotest_common.sh@864 -- # return 0 00:03:28.565 14:28:20 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:28.565 14:28:20 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:28.565 14:28:20 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:28.565 14:28:20 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:28.565 14:28:20 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:28.565 14:28:20 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:28.565 14:28:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:28.565 ************************************ 00:03:28.565 START TEST rpc_integrity 00:03:28.565 ************************************ 00:03:28.565 14:28:20 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:03:28.565 14:28:20 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:28.565 14:28:20 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:28.565 14:28:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.565 14:28:20 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:28.565 14:28:20 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:28.565 14:28:20 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:28.565 14:28:20 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:28.565 14:28:20 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:28.565 14:28:20 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:28.565 14:28:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.565 14:28:20 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:28.565 14:28:20 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:28.565 14:28:20 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:28.565 14:28:20 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:28.565 14:28:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.565 14:28:20 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:28.565 14:28:20 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:28.565 { 00:03:28.565 "name": "Malloc0", 00:03:28.565 "aliases": [ 00:03:28.565 "c814b763-820b-4a01-9a43-4a7583cf44e6" 00:03:28.565 ], 00:03:28.565 "product_name": "Malloc disk", 00:03:28.565 "block_size": 512, 00:03:28.565 "num_blocks": 16384, 00:03:28.565 "uuid": "c814b763-820b-4a01-9a43-4a7583cf44e6", 00:03:28.565 "assigned_rate_limits": { 00:03:28.565 "rw_ios_per_sec": 0, 00:03:28.565 "rw_mbytes_per_sec": 0, 00:03:28.565 "r_mbytes_per_sec": 0, 00:03:28.565 "w_mbytes_per_sec": 0 00:03:28.565 }, 00:03:28.565 "claimed": false, 00:03:28.565 "zoned": false, 00:03:28.565 "supported_io_types": { 00:03:28.565 "read": true, 00:03:28.565 "write": true, 00:03:28.565 "unmap": true, 00:03:28.565 "flush": true, 00:03:28.565 "reset": true, 00:03:28.565 "nvme_admin": false, 00:03:28.565 "nvme_io": false, 00:03:28.565 "nvme_io_md": false, 00:03:28.565 "write_zeroes": true, 00:03:28.565 "zcopy": true, 00:03:28.565 "get_zone_info": false, 00:03:28.565 "zone_management": false, 00:03:28.565 "zone_append": false, 00:03:28.565 "compare": false, 00:03:28.565 "compare_and_write": false, 00:03:28.565 "abort": true, 00:03:28.565 "seek_hole": false, 
00:03:28.565 "seek_data": false, 00:03:28.565 "copy": true, 00:03:28.565 "nvme_iov_md": false 00:03:28.565 }, 00:03:28.565 "memory_domains": [ 00:03:28.565 { 00:03:28.565 "dma_device_id": "system", 00:03:28.565 "dma_device_type": 1 00:03:28.565 }, 00:03:28.565 { 00:03:28.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:28.565 "dma_device_type": 2 00:03:28.565 } 00:03:28.565 ], 00:03:28.565 "driver_specific": {} 00:03:28.565 } 00:03:28.565 ]' 00:03:28.565 14:28:20 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:28.565 14:28:20 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:28.565 14:28:20 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:28.565 14:28:20 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:28.565 14:28:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.565 [2024-10-01 14:28:20.155553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:28.565 [2024-10-01 14:28:20.155602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:28.566 [2024-10-01 14:28:20.155624] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:03:28.566 [2024-10-01 14:28:20.155635] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:28.566 [2024-10-01 14:28:20.157823] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:28.566 [2024-10-01 14:28:20.157856] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:28.566 Passthru0 00:03:28.566 14:28:20 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:28.566 14:28:20 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:28.566 14:28:20 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:28.566 14:28:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:03:28.566 14:28:20 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:28.566 14:28:20 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:28.566 { 00:03:28.566 "name": "Malloc0", 00:03:28.566 "aliases": [ 00:03:28.566 "c814b763-820b-4a01-9a43-4a7583cf44e6" 00:03:28.566 ], 00:03:28.566 "product_name": "Malloc disk", 00:03:28.566 "block_size": 512, 00:03:28.566 "num_blocks": 16384, 00:03:28.566 "uuid": "c814b763-820b-4a01-9a43-4a7583cf44e6", 00:03:28.566 "assigned_rate_limits": { 00:03:28.566 "rw_ios_per_sec": 0, 00:03:28.566 "rw_mbytes_per_sec": 0, 00:03:28.566 "r_mbytes_per_sec": 0, 00:03:28.566 "w_mbytes_per_sec": 0 00:03:28.566 }, 00:03:28.566 "claimed": true, 00:03:28.566 "claim_type": "exclusive_write", 00:03:28.566 "zoned": false, 00:03:28.566 "supported_io_types": { 00:03:28.566 "read": true, 00:03:28.566 "write": true, 00:03:28.566 "unmap": true, 00:03:28.566 "flush": true, 00:03:28.566 "reset": true, 00:03:28.566 "nvme_admin": false, 00:03:28.566 "nvme_io": false, 00:03:28.566 "nvme_io_md": false, 00:03:28.566 "write_zeroes": true, 00:03:28.566 "zcopy": true, 00:03:28.566 "get_zone_info": false, 00:03:28.566 "zone_management": false, 00:03:28.566 "zone_append": false, 00:03:28.566 "compare": false, 00:03:28.566 "compare_and_write": false, 00:03:28.566 "abort": true, 00:03:28.566 "seek_hole": false, 00:03:28.566 "seek_data": false, 00:03:28.566 "copy": true, 00:03:28.566 "nvme_iov_md": false 00:03:28.566 }, 00:03:28.566 "memory_domains": [ 00:03:28.566 { 00:03:28.566 "dma_device_id": "system", 00:03:28.566 "dma_device_type": 1 00:03:28.566 }, 00:03:28.566 { 00:03:28.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:28.566 "dma_device_type": 2 00:03:28.566 } 00:03:28.566 ], 00:03:28.566 "driver_specific": {} 00:03:28.566 }, 00:03:28.566 { 00:03:28.566 "name": "Passthru0", 00:03:28.566 "aliases": [ 00:03:28.566 "994fcd90-e33d-5395-bd50-9ab044b44132" 00:03:28.566 ], 00:03:28.566 "product_name": "passthru", 00:03:28.566 
"block_size": 512, 00:03:28.566 "num_blocks": 16384, 00:03:28.566 "uuid": "994fcd90-e33d-5395-bd50-9ab044b44132", 00:03:28.566 "assigned_rate_limits": { 00:03:28.566 "rw_ios_per_sec": 0, 00:03:28.566 "rw_mbytes_per_sec": 0, 00:03:28.566 "r_mbytes_per_sec": 0, 00:03:28.566 "w_mbytes_per_sec": 0 00:03:28.566 }, 00:03:28.566 "claimed": false, 00:03:28.566 "zoned": false, 00:03:28.566 "supported_io_types": { 00:03:28.566 "read": true, 00:03:28.566 "write": true, 00:03:28.566 "unmap": true, 00:03:28.566 "flush": true, 00:03:28.566 "reset": true, 00:03:28.566 "nvme_admin": false, 00:03:28.566 "nvme_io": false, 00:03:28.566 "nvme_io_md": false, 00:03:28.566 "write_zeroes": true, 00:03:28.566 "zcopy": true, 00:03:28.566 "get_zone_info": false, 00:03:28.566 "zone_management": false, 00:03:28.566 "zone_append": false, 00:03:28.566 "compare": false, 00:03:28.566 "compare_and_write": false, 00:03:28.566 "abort": true, 00:03:28.566 "seek_hole": false, 00:03:28.566 "seek_data": false, 00:03:28.566 "copy": true, 00:03:28.566 "nvme_iov_md": false 00:03:28.566 }, 00:03:28.566 "memory_domains": [ 00:03:28.566 { 00:03:28.566 "dma_device_id": "system", 00:03:28.566 "dma_device_type": 1 00:03:28.566 }, 00:03:28.566 { 00:03:28.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:28.566 "dma_device_type": 2 00:03:28.566 } 00:03:28.566 ], 00:03:28.566 "driver_specific": { 00:03:28.566 "passthru": { 00:03:28.566 "name": "Passthru0", 00:03:28.566 "base_bdev_name": "Malloc0" 00:03:28.566 } 00:03:28.566 } 00:03:28.566 } 00:03:28.566 ]' 00:03:28.566 14:28:20 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:28.566 14:28:20 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:28.566 14:28:20 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:28.566 14:28:20 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:28.566 14:28:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.566 14:28:20 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:28.566 14:28:20 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:28.566 14:28:20 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:28.566 14:28:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.566 14:28:20 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:28.566 14:28:20 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:28.566 14:28:20 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:28.566 14:28:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.827 14:28:20 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:28.827 14:28:20 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:28.827 14:28:20 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:28.827 14:28:20 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:28.827 00:03:28.827 real 0m0.246s 00:03:28.827 user 0m0.133s 00:03:28.827 sys 0m0.026s 00:03:28.827 ************************************ 00:03:28.827 END TEST rpc_integrity 00:03:28.827 ************************************ 00:03:28.827 14:28:20 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:28.827 14:28:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.827 14:28:20 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:28.827 14:28:20 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:28.827 14:28:20 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:28.827 14:28:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:28.827 ************************************ 00:03:28.827 START TEST rpc_plugins 00:03:28.827 ************************************ 00:03:28.827 14:28:20 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:03:28.827 14:28:20 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:03:28.827 14:28:20 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:28.827 14:28:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:28.827 14:28:20 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:28.827 14:28:20 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:28.827 14:28:20 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:28.827 14:28:20 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:28.827 14:28:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:28.827 14:28:20 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:28.827 14:28:20 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:28.827 { 00:03:28.827 "name": "Malloc1", 00:03:28.827 "aliases": [ 00:03:28.827 "36adc9ac-ea91-42ea-8e7c-73a400098a07" 00:03:28.827 ], 00:03:28.827 "product_name": "Malloc disk", 00:03:28.827 "block_size": 4096, 00:03:28.827 "num_blocks": 256, 00:03:28.827 "uuid": "36adc9ac-ea91-42ea-8e7c-73a400098a07", 00:03:28.827 "assigned_rate_limits": { 00:03:28.827 "rw_ios_per_sec": 0, 00:03:28.827 "rw_mbytes_per_sec": 0, 00:03:28.827 "r_mbytes_per_sec": 0, 00:03:28.827 "w_mbytes_per_sec": 0 00:03:28.827 }, 00:03:28.827 "claimed": false, 00:03:28.827 "zoned": false, 00:03:28.827 "supported_io_types": { 00:03:28.827 "read": true, 00:03:28.827 "write": true, 00:03:28.827 "unmap": true, 00:03:28.827 "flush": true, 00:03:28.827 "reset": true, 00:03:28.827 "nvme_admin": false, 00:03:28.827 "nvme_io": false, 00:03:28.827 "nvme_io_md": false, 00:03:28.827 "write_zeroes": true, 00:03:28.827 "zcopy": true, 00:03:28.827 "get_zone_info": false, 00:03:28.827 "zone_management": false, 00:03:28.827 "zone_append": false, 00:03:28.827 "compare": false, 00:03:28.827 "compare_and_write": false, 00:03:28.827 "abort": true, 00:03:28.827 "seek_hole": false, 00:03:28.827 "seek_data": false, 00:03:28.827 "copy": 
true, 00:03:28.827 "nvme_iov_md": false 00:03:28.827 }, 00:03:28.827 "memory_domains": [ 00:03:28.827 { 00:03:28.827 "dma_device_id": "system", 00:03:28.827 "dma_device_type": 1 00:03:28.827 }, 00:03:28.827 { 00:03:28.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:28.827 "dma_device_type": 2 00:03:28.827 } 00:03:28.827 ], 00:03:28.827 "driver_specific": {} 00:03:28.827 } 00:03:28.827 ]' 00:03:28.827 14:28:20 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:28.827 14:28:20 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:28.827 14:28:20 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:28.827 14:28:20 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:28.827 14:28:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:28.827 14:28:20 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:28.827 14:28:20 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:28.827 14:28:20 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:28.827 14:28:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:28.827 14:28:20 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:28.827 14:28:20 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:28.827 14:28:20 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:28.827 14:28:20 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:28.827 00:03:28.827 real 0m0.114s 00:03:28.827 user 0m0.058s 00:03:28.827 sys 0m0.022s 00:03:28.827 ************************************ 00:03:28.827 END TEST rpc_plugins 00:03:28.827 14:28:20 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:28.827 14:28:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:28.827 ************************************ 00:03:28.827 14:28:20 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:28.827 14:28:20 rpc -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:28.827 14:28:20 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:28.827 14:28:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:28.827 ************************************ 00:03:28.827 START TEST rpc_trace_cmd_test 00:03:28.827 ************************************ 00:03:28.827 14:28:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:03:28.827 14:28:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:28.827 14:28:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:28.827 14:28:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:28.827 14:28:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:29.087 14:28:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:29.087 14:28:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:29.087 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56112", 00:03:29.087 "tpoint_group_mask": "0x8", 00:03:29.087 "iscsi_conn": { 00:03:29.087 "mask": "0x2", 00:03:29.087 "tpoint_mask": "0x0" 00:03:29.087 }, 00:03:29.087 "scsi": { 00:03:29.087 "mask": "0x4", 00:03:29.087 "tpoint_mask": "0x0" 00:03:29.087 }, 00:03:29.087 "bdev": { 00:03:29.087 "mask": "0x8", 00:03:29.087 "tpoint_mask": "0xffffffffffffffff" 00:03:29.087 }, 00:03:29.087 "nvmf_rdma": { 00:03:29.088 "mask": "0x10", 00:03:29.088 "tpoint_mask": "0x0" 00:03:29.088 }, 00:03:29.088 "nvmf_tcp": { 00:03:29.088 "mask": "0x20", 00:03:29.088 "tpoint_mask": "0x0" 00:03:29.088 }, 00:03:29.088 "ftl": { 00:03:29.088 "mask": "0x40", 00:03:29.088 "tpoint_mask": "0x0" 00:03:29.088 }, 00:03:29.088 "blobfs": { 00:03:29.088 "mask": "0x80", 00:03:29.088 "tpoint_mask": "0x0" 00:03:29.088 }, 00:03:29.088 "dsa": { 00:03:29.088 "mask": "0x200", 00:03:29.088 "tpoint_mask": "0x0" 00:03:29.088 }, 00:03:29.088 "thread": { 00:03:29.088 "mask": "0x400", 00:03:29.088 
"tpoint_mask": "0x0" 00:03:29.088 }, 00:03:29.088 "nvme_pcie": { 00:03:29.088 "mask": "0x800", 00:03:29.088 "tpoint_mask": "0x0" 00:03:29.088 }, 00:03:29.088 "iaa": { 00:03:29.088 "mask": "0x1000", 00:03:29.088 "tpoint_mask": "0x0" 00:03:29.088 }, 00:03:29.088 "nvme_tcp": { 00:03:29.088 "mask": "0x2000", 00:03:29.088 "tpoint_mask": "0x0" 00:03:29.088 }, 00:03:29.088 "bdev_nvme": { 00:03:29.088 "mask": "0x4000", 00:03:29.088 "tpoint_mask": "0x0" 00:03:29.088 }, 00:03:29.088 "sock": { 00:03:29.088 "mask": "0x8000", 00:03:29.088 "tpoint_mask": "0x0" 00:03:29.088 }, 00:03:29.088 "blob": { 00:03:29.088 "mask": "0x10000", 00:03:29.088 "tpoint_mask": "0x0" 00:03:29.088 }, 00:03:29.088 "bdev_raid": { 00:03:29.088 "mask": "0x20000", 00:03:29.088 "tpoint_mask": "0x0" 00:03:29.088 } 00:03:29.088 }' 00:03:29.088 14:28:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:29.088 14:28:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:03:29.088 14:28:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:29.088 14:28:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:29.088 14:28:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:29.088 14:28:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:29.088 14:28:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:29.088 14:28:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:29.088 14:28:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:29.088 14:28:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:29.088 00:03:29.088 real 0m0.182s 00:03:29.088 user 0m0.145s 00:03:29.088 sys 0m0.023s 00:03:29.088 14:28:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:29.088 14:28:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:29.088 
************************************ 00:03:29.088 END TEST rpc_trace_cmd_test 00:03:29.088 ************************************ 00:03:29.088 14:28:20 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:29.088 14:28:20 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:29.088 14:28:20 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:29.088 14:28:20 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:29.088 14:28:20 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:29.088 14:28:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:29.088 ************************************ 00:03:29.088 START TEST rpc_daemon_integrity 00:03:29.088 ************************************ 00:03:29.088 14:28:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:03:29.088 14:28:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:29.088 14:28:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:29.088 14:28:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.088 14:28:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:29.088 14:28:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:29.088 14:28:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:29.350 14:28:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:29.350 14:28:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:29.350 14:28:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:29.350 14:28:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.350 14:28:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:29.350 14:28:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:29.350 14:28:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd 
bdev_get_bdevs 00:03:29.350 14:28:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:29.350 14:28:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.350 14:28:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:29.350 14:28:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:29.350 { 00:03:29.350 "name": "Malloc2", 00:03:29.350 "aliases": [ 00:03:29.350 "bf3cfb77-ae28-4103-8d50-78282de9dfa6" 00:03:29.350 ], 00:03:29.350 "product_name": "Malloc disk", 00:03:29.350 "block_size": 512, 00:03:29.350 "num_blocks": 16384, 00:03:29.350 "uuid": "bf3cfb77-ae28-4103-8d50-78282de9dfa6", 00:03:29.350 "assigned_rate_limits": { 00:03:29.350 "rw_ios_per_sec": 0, 00:03:29.350 "rw_mbytes_per_sec": 0, 00:03:29.350 "r_mbytes_per_sec": 0, 00:03:29.350 "w_mbytes_per_sec": 0 00:03:29.350 }, 00:03:29.350 "claimed": false, 00:03:29.350 "zoned": false, 00:03:29.350 "supported_io_types": { 00:03:29.350 "read": true, 00:03:29.350 "write": true, 00:03:29.350 "unmap": true, 00:03:29.350 "flush": true, 00:03:29.350 "reset": true, 00:03:29.350 "nvme_admin": false, 00:03:29.350 "nvme_io": false, 00:03:29.350 "nvme_io_md": false, 00:03:29.350 "write_zeroes": true, 00:03:29.350 "zcopy": true, 00:03:29.350 "get_zone_info": false, 00:03:29.350 "zone_management": false, 00:03:29.350 "zone_append": false, 00:03:29.350 "compare": false, 00:03:29.350 "compare_and_write": false, 00:03:29.350 "abort": true, 00:03:29.350 "seek_hole": false, 00:03:29.350 "seek_data": false, 00:03:29.350 "copy": true, 00:03:29.350 "nvme_iov_md": false 00:03:29.350 }, 00:03:29.350 "memory_domains": [ 00:03:29.350 { 00:03:29.351 "dma_device_id": "system", 00:03:29.351 "dma_device_type": 1 00:03:29.351 }, 00:03:29.351 { 00:03:29.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:29.351 "dma_device_type": 2 00:03:29.351 } 00:03:29.351 ], 00:03:29.351 "driver_specific": {} 00:03:29.351 } 00:03:29.351 ]' 00:03:29.351 
14:28:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:29.351 14:28:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:29.351 14:28:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:29.351 14:28:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:29.351 14:28:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.351 [2024-10-01 14:28:20.846539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:29.351 [2024-10-01 14:28:20.846590] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:29.351 [2024-10-01 14:28:20.846612] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:03:29.351 [2024-10-01 14:28:20.846624] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:29.351 [2024-10-01 14:28:20.848771] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:29.351 [2024-10-01 14:28:20.848801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:29.351 Passthru0 00:03:29.351 14:28:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:29.351 14:28:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:29.351 14:28:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:29.351 14:28:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.351 14:28:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:29.351 14:28:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:29.351 { 00:03:29.351 "name": "Malloc2", 00:03:29.351 "aliases": [ 00:03:29.351 "bf3cfb77-ae28-4103-8d50-78282de9dfa6" 00:03:29.351 ], 00:03:29.351 "product_name": "Malloc disk", 00:03:29.351 "block_size": 512, 
00:03:29.351 "num_blocks": 16384, 00:03:29.351 "uuid": "bf3cfb77-ae28-4103-8d50-78282de9dfa6", 00:03:29.351 "assigned_rate_limits": { 00:03:29.351 "rw_ios_per_sec": 0, 00:03:29.351 "rw_mbytes_per_sec": 0, 00:03:29.351 "r_mbytes_per_sec": 0, 00:03:29.351 "w_mbytes_per_sec": 0 00:03:29.351 }, 00:03:29.351 "claimed": true, 00:03:29.351 "claim_type": "exclusive_write", 00:03:29.351 "zoned": false, 00:03:29.351 "supported_io_types": { 00:03:29.351 "read": true, 00:03:29.351 "write": true, 00:03:29.351 "unmap": true, 00:03:29.351 "flush": true, 00:03:29.351 "reset": true, 00:03:29.351 "nvme_admin": false, 00:03:29.351 "nvme_io": false, 00:03:29.351 "nvme_io_md": false, 00:03:29.351 "write_zeroes": true, 00:03:29.351 "zcopy": true, 00:03:29.351 "get_zone_info": false, 00:03:29.351 "zone_management": false, 00:03:29.351 "zone_append": false, 00:03:29.351 "compare": false, 00:03:29.351 "compare_and_write": false, 00:03:29.351 "abort": true, 00:03:29.351 "seek_hole": false, 00:03:29.351 "seek_data": false, 00:03:29.351 "copy": true, 00:03:29.351 "nvme_iov_md": false 00:03:29.351 }, 00:03:29.351 "memory_domains": [ 00:03:29.351 { 00:03:29.351 "dma_device_id": "system", 00:03:29.351 "dma_device_type": 1 00:03:29.351 }, 00:03:29.351 { 00:03:29.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:29.351 "dma_device_type": 2 00:03:29.351 } 00:03:29.351 ], 00:03:29.351 "driver_specific": {} 00:03:29.351 }, 00:03:29.351 { 00:03:29.351 "name": "Passthru0", 00:03:29.351 "aliases": [ 00:03:29.351 "21a4b60c-a46f-5d77-9448-cd1bc1d5120f" 00:03:29.351 ], 00:03:29.351 "product_name": "passthru", 00:03:29.351 "block_size": 512, 00:03:29.351 "num_blocks": 16384, 00:03:29.351 "uuid": "21a4b60c-a46f-5d77-9448-cd1bc1d5120f", 00:03:29.351 "assigned_rate_limits": { 00:03:29.351 "rw_ios_per_sec": 0, 00:03:29.351 "rw_mbytes_per_sec": 0, 00:03:29.351 "r_mbytes_per_sec": 0, 00:03:29.351 "w_mbytes_per_sec": 0 00:03:29.351 }, 00:03:29.351 "claimed": false, 00:03:29.351 "zoned": false, 00:03:29.351 
"supported_io_types": { 00:03:29.351 "read": true, 00:03:29.351 "write": true, 00:03:29.351 "unmap": true, 00:03:29.351 "flush": true, 00:03:29.351 "reset": true, 00:03:29.351 "nvme_admin": false, 00:03:29.351 "nvme_io": false, 00:03:29.351 "nvme_io_md": false, 00:03:29.351 "write_zeroes": true, 00:03:29.351 "zcopy": true, 00:03:29.351 "get_zone_info": false, 00:03:29.351 "zone_management": false, 00:03:29.351 "zone_append": false, 00:03:29.351 "compare": false, 00:03:29.351 "compare_and_write": false, 00:03:29.351 "abort": true, 00:03:29.351 "seek_hole": false, 00:03:29.351 "seek_data": false, 00:03:29.351 "copy": true, 00:03:29.351 "nvme_iov_md": false 00:03:29.351 }, 00:03:29.351 "memory_domains": [ 00:03:29.351 { 00:03:29.351 "dma_device_id": "system", 00:03:29.351 "dma_device_type": 1 00:03:29.351 }, 00:03:29.351 { 00:03:29.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:29.351 "dma_device_type": 2 00:03:29.351 } 00:03:29.351 ], 00:03:29.351 "driver_specific": { 00:03:29.351 "passthru": { 00:03:29.351 "name": "Passthru0", 00:03:29.351 "base_bdev_name": "Malloc2" 00:03:29.351 } 00:03:29.351 } 00:03:29.351 } 00:03:29.351 ]' 00:03:29.351 14:28:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:29.351 14:28:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:29.351 14:28:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:29.351 14:28:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:29.351 14:28:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.351 14:28:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:29.351 14:28:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:29.351 14:28:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:29.351 14:28:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # 
set +x 00:03:29.351 14:28:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:29.351 14:28:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:29.351 14:28:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:29.351 14:28:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.351 14:28:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:29.351 14:28:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:29.351 14:28:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:29.351 14:28:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:29.351 00:03:29.351 real 0m0.244s 00:03:29.351 user 0m0.123s 00:03:29.351 sys 0m0.038s 00:03:29.351 14:28:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:29.351 ************************************ 00:03:29.351 END TEST rpc_daemon_integrity 00:03:29.351 ************************************ 00:03:29.351 14:28:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.351 14:28:21 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:29.351 14:28:21 rpc -- rpc/rpc.sh@84 -- # killprocess 56112 00:03:29.351 14:28:21 rpc -- common/autotest_common.sh@950 -- # '[' -z 56112 ']' 00:03:29.351 14:28:21 rpc -- common/autotest_common.sh@954 -- # kill -0 56112 00:03:29.351 14:28:21 rpc -- common/autotest_common.sh@955 -- # uname 00:03:29.351 14:28:21 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:29.351 14:28:21 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 56112 00:03:29.620 14:28:21 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:29.620 killing process with pid 56112 00:03:29.620 14:28:21 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:29.620 14:28:21 rpc -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 56112' 00:03:29.620 14:28:21 rpc -- common/autotest_common.sh@969 -- # kill 56112 00:03:29.620 14:28:21 rpc -- common/autotest_common.sh@974 -- # wait 56112 00:03:31.002 00:03:31.002 real 0m3.765s 00:03:31.002 user 0m4.189s 00:03:31.002 sys 0m0.602s 00:03:31.002 14:28:22 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:31.002 14:28:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:31.002 ************************************ 00:03:31.002 END TEST rpc 00:03:31.002 ************************************ 00:03:31.002 14:28:22 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:03:31.002 14:28:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:31.002 14:28:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:31.002 14:28:22 -- common/autotest_common.sh@10 -- # set +x 00:03:31.263 ************************************ 00:03:31.263 START TEST skip_rpc 00:03:31.263 ************************************ 00:03:31.263 14:28:22 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:03:31.263 * Looking for test storage... 
00:03:31.263 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:03:31.263 14:28:22 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:31.263 14:28:22 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:03:31.263 14:28:22 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:31.263 14:28:22 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:31.263 14:28:22 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:31.263 14:28:22 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:31.263 14:28:22 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:31.263 14:28:22 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:31.263 14:28:22 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:31.263 14:28:22 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:31.263 14:28:22 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:31.263 14:28:22 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:31.263 14:28:22 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:31.263 14:28:22 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:31.263 14:28:22 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:31.263 14:28:22 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:31.263 14:28:22 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:31.263 14:28:22 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:31.263 14:28:22 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:31.263 14:28:22 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:31.263 14:28:22 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:31.263 14:28:22 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:31.263 14:28:22 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:31.263 14:28:22 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:31.263 14:28:22 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:31.263 14:28:22 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:31.263 14:28:22 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:31.263 14:28:22 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:31.263 14:28:22 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:31.263 14:28:22 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:31.263 14:28:22 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:31.263 14:28:22 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:31.263 14:28:22 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:31.263 14:28:22 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:31.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.263 --rc genhtml_branch_coverage=1 00:03:31.263 --rc genhtml_function_coverage=1 00:03:31.263 --rc genhtml_legend=1 00:03:31.263 --rc geninfo_all_blocks=1 00:03:31.263 --rc geninfo_unexecuted_blocks=1 00:03:31.263 00:03:31.263 ' 00:03:31.263 14:28:22 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:31.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.263 --rc genhtml_branch_coverage=1 00:03:31.263 --rc genhtml_function_coverage=1 00:03:31.263 --rc genhtml_legend=1 00:03:31.263 --rc geninfo_all_blocks=1 00:03:31.263 --rc geninfo_unexecuted_blocks=1 00:03:31.263 00:03:31.263 ' 00:03:31.263 14:28:22 skip_rpc -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:03:31.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.263 --rc genhtml_branch_coverage=1 00:03:31.263 --rc genhtml_function_coverage=1 00:03:31.263 --rc genhtml_legend=1 00:03:31.263 --rc geninfo_all_blocks=1 00:03:31.263 --rc geninfo_unexecuted_blocks=1 00:03:31.263 00:03:31.263 ' 00:03:31.263 14:28:22 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:31.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.263 --rc genhtml_branch_coverage=1 00:03:31.263 --rc genhtml_function_coverage=1 00:03:31.263 --rc genhtml_legend=1 00:03:31.263 --rc geninfo_all_blocks=1 00:03:31.263 --rc geninfo_unexecuted_blocks=1 00:03:31.263 00:03:31.263 ' 00:03:31.263 14:28:22 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:03:31.263 14:28:22 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:03:31.263 14:28:22 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:31.263 14:28:22 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:31.263 14:28:22 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:31.263 14:28:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:31.263 ************************************ 00:03:31.263 START TEST skip_rpc 00:03:31.263 ************************************ 00:03:31.263 14:28:22 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:03:31.263 14:28:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=56325 00:03:31.263 14:28:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:31.263 14:28:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:31.263 14:28:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:31.263 [2024-10-01 14:28:22.932847] Starting SPDK v25.01-pre 
git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:03:31.263 [2024-10-01 14:28:22.932957] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56325 ] 00:03:31.523 [2024-10-01 14:28:23.080253] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:31.783 [2024-10-01 14:28:23.326979] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:03:37.112 14:28:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:37.112 14:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:03:37.112 14:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:37.112 14:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:03:37.112 14:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:37.112 14:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:03:37.112 14:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:37.112 14:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:03:37.112 14:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:37.112 14:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:37.112 14:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:37.112 14:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:03:37.112 14:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:37.112 14:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:03:37.112 14:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:03:37.112 14:28:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:37.112 14:28:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56325 00:03:37.112 14:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 56325 ']' 00:03:37.112 14:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 56325 00:03:37.112 14:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:03:37.112 14:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:37.112 14:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 56325 00:03:37.112 14:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:37.112 killing process with pid 56325 00:03:37.112 14:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:37.112 14:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 56325' 00:03:37.112 14:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 56325 00:03:37.112 14:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 56325 00:03:38.054 00:03:38.054 real 0m6.663s 00:03:38.054 user 0m6.287s 00:03:38.054 sys 0m0.273s 00:03:38.054 14:28:29 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:38.054 ************************************ 00:03:38.054 END TEST skip_rpc 00:03:38.054 ************************************ 00:03:38.054 14:28:29 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:38.054 14:28:29 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:38.054 14:28:29 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:38.054 14:28:29 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:38.054 14:28:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:38.054 
************************************ 00:03:38.054 START TEST skip_rpc_with_json 00:03:38.054 ************************************ 00:03:38.054 14:28:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:03:38.054 14:28:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:38.054 14:28:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=56423 00:03:38.054 14:28:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:38.054 14:28:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 56423 00:03:38.054 14:28:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 56423 ']' 00:03:38.054 14:28:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:38.054 14:28:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:38.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:38.054 14:28:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:38.054 14:28:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:38.054 14:28:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:38.054 14:28:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:03:38.054 [2024-10-01 14:28:29.655688] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:03:38.054 [2024-10-01 14:28:29.655821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56423 ] 00:03:38.316 [2024-10-01 14:28:29.801292] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:38.316 [2024-10-01 14:28:29.993208] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:03:39.258 14:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:39.258 14:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:03:39.258 14:28:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:39.258 14:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:39.258 14:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:39.258 [2024-10-01 14:28:30.584108] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:39.258 request: 00:03:39.258 { 00:03:39.258 "trtype": "tcp", 00:03:39.258 "method": "nvmf_get_transports", 00:03:39.258 "req_id": 1 00:03:39.258 } 00:03:39.258 Got JSON-RPC error response 00:03:39.258 response: 00:03:39.258 { 00:03:39.258 "code": -19, 00:03:39.258 "message": "No such device" 00:03:39.258 } 00:03:39.258 14:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:39.258 14:28:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:39.258 14:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:39.258 14:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:39.258 [2024-10-01 14:28:30.596209] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:03:39.258 14:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:39.258 14:28:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:39.258 14:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:39.258 14:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:39.258 14:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:39.258 14:28:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:03:39.258 { 00:03:39.258 "subsystems": [ 00:03:39.258 { 00:03:39.258 "subsystem": "fsdev", 00:03:39.258 "config": [ 00:03:39.258 { 00:03:39.258 "method": "fsdev_set_opts", 00:03:39.258 "params": { 00:03:39.258 "fsdev_io_pool_size": 65535, 00:03:39.258 "fsdev_io_cache_size": 256 00:03:39.258 } 00:03:39.258 } 00:03:39.258 ] 00:03:39.258 }, 00:03:39.258 { 00:03:39.258 "subsystem": "keyring", 00:03:39.258 "config": [] 00:03:39.258 }, 00:03:39.258 { 00:03:39.258 "subsystem": "iobuf", 00:03:39.258 "config": [ 00:03:39.258 { 00:03:39.258 "method": "iobuf_set_options", 00:03:39.258 "params": { 00:03:39.258 "small_pool_count": 8192, 00:03:39.258 "large_pool_count": 1024, 00:03:39.258 "small_bufsize": 8192, 00:03:39.258 "large_bufsize": 135168 00:03:39.258 } 00:03:39.258 } 00:03:39.258 ] 00:03:39.258 }, 00:03:39.258 { 00:03:39.258 "subsystem": "sock", 00:03:39.258 "config": [ 00:03:39.258 { 00:03:39.258 "method": "sock_set_default_impl", 00:03:39.258 "params": { 00:03:39.258 "impl_name": "posix" 00:03:39.258 } 00:03:39.258 }, 00:03:39.258 { 00:03:39.258 "method": "sock_impl_set_options", 00:03:39.258 "params": { 00:03:39.258 "impl_name": "ssl", 00:03:39.258 "recv_buf_size": 4096, 00:03:39.258 "send_buf_size": 4096, 00:03:39.258 "enable_recv_pipe": true, 00:03:39.258 "enable_quickack": false, 00:03:39.258 "enable_placement_id": 0, 00:03:39.258 
"enable_zerocopy_send_server": true, 00:03:39.258 "enable_zerocopy_send_client": false, 00:03:39.258 "zerocopy_threshold": 0, 00:03:39.258 "tls_version": 0, 00:03:39.258 "enable_ktls": false 00:03:39.258 } 00:03:39.258 }, 00:03:39.258 { 00:03:39.258 "method": "sock_impl_set_options", 00:03:39.258 "params": { 00:03:39.258 "impl_name": "posix", 00:03:39.258 "recv_buf_size": 2097152, 00:03:39.258 "send_buf_size": 2097152, 00:03:39.258 "enable_recv_pipe": true, 00:03:39.258 "enable_quickack": false, 00:03:39.258 "enable_placement_id": 0, 00:03:39.258 "enable_zerocopy_send_server": true, 00:03:39.258 "enable_zerocopy_send_client": false, 00:03:39.258 "zerocopy_threshold": 0, 00:03:39.258 "tls_version": 0, 00:03:39.258 "enable_ktls": false 00:03:39.258 } 00:03:39.258 } 00:03:39.258 ] 00:03:39.258 }, 00:03:39.258 { 00:03:39.258 "subsystem": "vmd", 00:03:39.258 "config": [] 00:03:39.258 }, 00:03:39.258 { 00:03:39.258 "subsystem": "accel", 00:03:39.258 "config": [ 00:03:39.258 { 00:03:39.258 "method": "accel_set_options", 00:03:39.258 "params": { 00:03:39.258 "small_cache_size": 128, 00:03:39.258 "large_cache_size": 16, 00:03:39.258 "task_count": 2048, 00:03:39.258 "sequence_count": 2048, 00:03:39.258 "buf_count": 2048 00:03:39.258 } 00:03:39.258 } 00:03:39.258 ] 00:03:39.258 }, 00:03:39.258 { 00:03:39.258 "subsystem": "bdev", 00:03:39.258 "config": [ 00:03:39.258 { 00:03:39.258 "method": "bdev_set_options", 00:03:39.258 "params": { 00:03:39.258 "bdev_io_pool_size": 65535, 00:03:39.258 "bdev_io_cache_size": 256, 00:03:39.258 "bdev_auto_examine": true, 00:03:39.258 "iobuf_small_cache_size": 128, 00:03:39.258 "iobuf_large_cache_size": 16 00:03:39.258 } 00:03:39.258 }, 00:03:39.258 { 00:03:39.258 "method": "bdev_raid_set_options", 00:03:39.258 "params": { 00:03:39.258 "process_window_size_kb": 1024, 00:03:39.258 "process_max_bandwidth_mb_sec": 0 00:03:39.258 } 00:03:39.258 }, 00:03:39.258 { 00:03:39.258 "method": "bdev_iscsi_set_options", 00:03:39.258 "params": { 00:03:39.258 
"timeout_sec": 30 00:03:39.258 } 00:03:39.258 }, 00:03:39.258 { 00:03:39.258 "method": "bdev_nvme_set_options", 00:03:39.258 "params": { 00:03:39.258 "action_on_timeout": "none", 00:03:39.258 "timeout_us": 0, 00:03:39.258 "timeout_admin_us": 0, 00:03:39.258 "keep_alive_timeout_ms": 10000, 00:03:39.258 "arbitration_burst": 0, 00:03:39.258 "low_priority_weight": 0, 00:03:39.258 "medium_priority_weight": 0, 00:03:39.258 "high_priority_weight": 0, 00:03:39.258 "nvme_adminq_poll_period_us": 10000, 00:03:39.258 "nvme_ioq_poll_period_us": 0, 00:03:39.258 "io_queue_requests": 0, 00:03:39.258 "delay_cmd_submit": true, 00:03:39.258 "transport_retry_count": 4, 00:03:39.258 "bdev_retry_count": 3, 00:03:39.258 "transport_ack_timeout": 0, 00:03:39.258 "ctrlr_loss_timeout_sec": 0, 00:03:39.258 "reconnect_delay_sec": 0, 00:03:39.258 "fast_io_fail_timeout_sec": 0, 00:03:39.258 "disable_auto_failback": false, 00:03:39.258 "generate_uuids": false, 00:03:39.258 "transport_tos": 0, 00:03:39.258 "nvme_error_stat": false, 00:03:39.258 "rdma_srq_size": 0, 00:03:39.258 "io_path_stat": false, 00:03:39.258 "allow_accel_sequence": false, 00:03:39.258 "rdma_max_cq_size": 0, 00:03:39.258 "rdma_cm_event_timeout_ms": 0, 00:03:39.258 "dhchap_digests": [ 00:03:39.258 "sha256", 00:03:39.258 "sha384", 00:03:39.258 "sha512" 00:03:39.258 ], 00:03:39.258 "dhchap_dhgroups": [ 00:03:39.258 "null", 00:03:39.258 "ffdhe2048", 00:03:39.258 "ffdhe3072", 00:03:39.258 "ffdhe4096", 00:03:39.258 "ffdhe6144", 00:03:39.258 "ffdhe8192" 00:03:39.258 ] 00:03:39.258 } 00:03:39.258 }, 00:03:39.258 { 00:03:39.258 "method": "bdev_nvme_set_hotplug", 00:03:39.258 "params": { 00:03:39.258 "period_us": 100000, 00:03:39.258 "enable": false 00:03:39.258 } 00:03:39.258 }, 00:03:39.258 { 00:03:39.258 "method": "bdev_wait_for_examine" 00:03:39.258 } 00:03:39.258 ] 00:03:39.258 }, 00:03:39.258 { 00:03:39.258 "subsystem": "scsi", 00:03:39.258 "config": null 00:03:39.258 }, 00:03:39.258 { 00:03:39.258 "subsystem": "scheduler", 
00:03:39.258 "config": [ 00:03:39.258 { 00:03:39.258 "method": "framework_set_scheduler", 00:03:39.258 "params": { 00:03:39.258 "name": "static" 00:03:39.258 } 00:03:39.258 } 00:03:39.258 ] 00:03:39.258 }, 00:03:39.258 { 00:03:39.258 "subsystem": "vhost_scsi", 00:03:39.258 "config": [] 00:03:39.258 }, 00:03:39.258 { 00:03:39.258 "subsystem": "vhost_blk", 00:03:39.258 "config": [] 00:03:39.258 }, 00:03:39.258 { 00:03:39.258 "subsystem": "ublk", 00:03:39.258 "config": [] 00:03:39.258 }, 00:03:39.258 { 00:03:39.258 "subsystem": "nbd", 00:03:39.258 "config": [] 00:03:39.258 }, 00:03:39.258 { 00:03:39.258 "subsystem": "nvmf", 00:03:39.258 "config": [ 00:03:39.258 { 00:03:39.258 "method": "nvmf_set_config", 00:03:39.258 "params": { 00:03:39.258 "discovery_filter": "match_any", 00:03:39.258 "admin_cmd_passthru": { 00:03:39.258 "identify_ctrlr": false 00:03:39.258 }, 00:03:39.259 "dhchap_digests": [ 00:03:39.259 "sha256", 00:03:39.259 "sha384", 00:03:39.259 "sha512" 00:03:39.259 ], 00:03:39.259 "dhchap_dhgroups": [ 00:03:39.259 "null", 00:03:39.259 "ffdhe2048", 00:03:39.259 "ffdhe3072", 00:03:39.259 "ffdhe4096", 00:03:39.259 "ffdhe6144", 00:03:39.259 "ffdhe8192" 00:03:39.259 ] 00:03:39.259 } 00:03:39.259 }, 00:03:39.259 { 00:03:39.259 "method": "nvmf_set_max_subsystems", 00:03:39.259 "params": { 00:03:39.259 "max_subsystems": 1024 00:03:39.259 } 00:03:39.259 }, 00:03:39.259 { 00:03:39.259 "method": "nvmf_set_crdt", 00:03:39.259 "params": { 00:03:39.259 "crdt1": 0, 00:03:39.259 "crdt2": 0, 00:03:39.259 "crdt3": 0 00:03:39.259 } 00:03:39.259 }, 00:03:39.259 { 00:03:39.259 "method": "nvmf_create_transport", 00:03:39.259 "params": { 00:03:39.259 "trtype": "TCP", 00:03:39.259 "max_queue_depth": 128, 00:03:39.259 "max_io_qpairs_per_ctrlr": 127, 00:03:39.259 "in_capsule_data_size": 4096, 00:03:39.259 "max_io_size": 131072, 00:03:39.259 "io_unit_size": 131072, 00:03:39.259 "max_aq_depth": 128, 00:03:39.259 "num_shared_buffers": 511, 00:03:39.259 "buf_cache_size": 4294967295, 
00:03:39.259 "dif_insert_or_strip": false, 00:03:39.259 "zcopy": false, 00:03:39.259 "c2h_success": true, 00:03:39.259 "sock_priority": 0, 00:03:39.259 "abort_timeout_sec": 1, 00:03:39.259 "ack_timeout": 0, 00:03:39.259 "data_wr_pool_size": 0 00:03:39.259 } 00:03:39.259 } 00:03:39.259 ] 00:03:39.259 }, 00:03:39.259 { 00:03:39.259 "subsystem": "iscsi", 00:03:39.259 "config": [ 00:03:39.259 { 00:03:39.259 "method": "iscsi_set_options", 00:03:39.259 "params": { 00:03:39.259 "node_base": "iqn.2016-06.io.spdk", 00:03:39.259 "max_sessions": 128, 00:03:39.259 "max_connections_per_session": 2, 00:03:39.259 "max_queue_depth": 64, 00:03:39.259 "default_time2wait": 2, 00:03:39.259 "default_time2retain": 20, 00:03:39.259 "first_burst_length": 8192, 00:03:39.259 "immediate_data": true, 00:03:39.259 "allow_duplicated_isid": false, 00:03:39.259 "error_recovery_level": 0, 00:03:39.259 "nop_timeout": 60, 00:03:39.259 "nop_in_interval": 30, 00:03:39.259 "disable_chap": false, 00:03:39.259 "require_chap": false, 00:03:39.259 "mutual_chap": false, 00:03:39.259 "chap_group": 0, 00:03:39.259 "max_large_datain_per_connection": 64, 00:03:39.259 "max_r2t_per_connection": 4, 00:03:39.259 "pdu_pool_size": 36864, 00:03:39.259 "immediate_data_pool_size": 16384, 00:03:39.259 "data_out_pool_size": 2048 00:03:39.259 } 00:03:39.259 } 00:03:39.259 ] 00:03:39.259 } 00:03:39.259 ] 00:03:39.259 } 00:03:39.259 14:28:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:39.259 14:28:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 56423 00:03:39.259 14:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 56423 ']' 00:03:39.259 14:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 56423 00:03:39.259 14:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:03:39.259 14:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:03:39.259 14:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 56423 00:03:39.259 14:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:39.259 killing process with pid 56423 00:03:39.259 14:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:39.259 14:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 56423' 00:03:39.259 14:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 56423 00:03:39.259 14:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 56423 00:03:41.172 14:28:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=56468 00:03:41.172 14:28:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:03:41.172 14:28:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:46.463 14:28:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 56468 00:03:46.463 14:28:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 56468 ']' 00:03:46.463 14:28:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 56468 00:03:46.463 14:28:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:03:46.463 14:28:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:46.463 14:28:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 56468 00:03:46.463 killing process with pid 56468 00:03:46.463 14:28:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:46.463 14:28:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 
00:03:46.463 14:28:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 56468' 00:03:46.463 14:28:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 56468 00:03:46.463 14:28:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 56468 00:03:47.854 14:28:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:03:47.854 14:28:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:03:47.854 00:03:47.854 real 0m9.614s 00:03:47.854 user 0m9.159s 00:03:47.854 sys 0m0.671s 00:03:47.854 14:28:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:47.854 ************************************ 00:03:47.854 END TEST skip_rpc_with_json 00:03:47.854 ************************************ 00:03:47.854 14:28:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:47.854 14:28:39 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:47.854 14:28:39 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:47.854 14:28:39 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:47.854 14:28:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:47.854 ************************************ 00:03:47.854 START TEST skip_rpc_with_delay 00:03:47.854 ************************************ 00:03:47.854 14:28:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:03:47.854 14:28:39 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:47.854 14:28:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:03:47.854 14:28:39 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:47.854 14:28:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:03:47.854 14:28:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:47.854 14:28:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:03:47.854 14:28:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:47.854 14:28:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:03:47.854 14:28:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:47.854 14:28:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:03:47.854 14:28:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:03:47.854 14:28:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:47.854 [2024-10-01 14:28:39.375861] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:03:47.854 [2024-10-01 14:28:39.376058] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:03:47.854 14:28:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:03:47.854 14:28:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:47.854 14:28:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:03:47.854 14:28:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:47.854 00:03:47.854 real 0m0.162s 00:03:47.854 user 0m0.084s 00:03:47.854 sys 0m0.075s 00:03:47.854 14:28:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:47.854 ************************************ 00:03:47.854 END TEST skip_rpc_with_delay 00:03:47.854 ************************************ 00:03:47.854 14:28:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:47.854 14:28:39 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:47.854 14:28:39 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:47.854 14:28:39 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:47.854 14:28:39 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:47.854 14:28:39 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:47.854 14:28:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:47.854 ************************************ 00:03:47.854 START TEST exit_on_failed_rpc_init 00:03:47.854 ************************************ 00:03:47.855 14:28:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:03:47.855 14:28:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=56595 00:03:47.855 14:28:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 56595 00:03:47.855 14:28:39 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 56595 ']' 00:03:47.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:47.855 14:28:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:47.855 14:28:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:47.855 14:28:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:47.855 14:28:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:47.855 14:28:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:47.855 14:28:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:03:48.117 [2024-10-01 14:28:39.599123] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:03:48.117 [2024-10-01 14:28:39.599274] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56595 ] 00:03:48.117 [2024-10-01 14:28:39.754252] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:48.464 [2024-10-01 14:28:40.000105] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:03:49.037 14:28:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:49.037 14:28:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:03:49.037 14:28:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:49.037 14:28:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:03:49.037 14:28:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:03:49.037 14:28:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:03:49.037 14:28:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:03:49.037 14:28:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:49.037 14:28:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:03:49.037 14:28:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:49.037 14:28:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:03:49.037 14:28:40 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:49.037 14:28:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:03:49.037 14:28:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:03:49.037 14:28:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:03:49.301 [2024-10-01 14:28:40.806613] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:03:49.301 [2024-10-01 14:28:40.806803] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56614 ] 00:03:49.301 [2024-10-01 14:28:40.963094] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:49.876 [2024-10-01 14:28:41.267603] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:03:49.876 [2024-10-01 14:28:41.267783] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:03:49.876 [2024-10-01 14:28:41.267801] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:49.876 [2024-10-01 14:28:41.267816] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:50.138 14:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:03:50.138 14:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:50.138 14:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:03:50.138 14:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:03:50.138 14:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:03:50.138 14:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:50.138 14:28:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:50.138 14:28:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 56595 00:03:50.138 14:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 56595 ']' 00:03:50.138 14:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 56595 00:03:50.138 14:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:03:50.138 14:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:50.138 14:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 56595 00:03:50.138 killing process with pid 56595 00:03:50.138 14:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:50.138 14:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:50.138 14:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 56595' 00:03:50.138 14:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 56595 00:03:50.138 14:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 56595 00:03:52.055 ************************************ 00:03:52.056 END TEST exit_on_failed_rpc_init 00:03:52.056 ************************************ 00:03:52.056 00:03:52.056 real 0m3.959s 00:03:52.056 user 0m4.471s 00:03:52.056 sys 0m0.632s 00:03:52.056 14:28:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:52.056 14:28:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:52.056 14:28:43 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:03:52.056 ************************************ 00:03:52.056 END TEST skip_rpc 00:03:52.056 ************************************ 00:03:52.056 00:03:52.056 real 0m20.845s 00:03:52.056 user 0m20.167s 00:03:52.056 sys 0m1.840s 00:03:52.056 14:28:43 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:52.056 14:28:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:52.056 14:28:43 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:03:52.056 14:28:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:52.056 14:28:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:52.056 14:28:43 -- common/autotest_common.sh@10 -- # set +x 00:03:52.056 ************************************ 00:03:52.056 START TEST rpc_client 00:03:52.056 ************************************ 00:03:52.056 14:28:43 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:03:52.056 * Looking for test storage... 
00:03:52.056 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:03:52.056 14:28:43 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:52.056 14:28:43 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:03:52.056 14:28:43 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:52.319 14:28:43 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:52.319 14:28:43 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:52.319 14:28:43 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:52.319 14:28:43 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:52.319 14:28:43 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:03:52.319 14:28:43 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:03:52.319 14:28:43 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:03:52.319 14:28:43 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:03:52.319 14:28:43 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:03:52.319 14:28:43 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:03:52.319 14:28:43 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:03:52.319 14:28:43 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:52.319 14:28:43 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:03:52.319 14:28:43 rpc_client -- scripts/common.sh@345 -- # : 1 00:03:52.319 14:28:43 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:52.319 14:28:43 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:52.319 14:28:43 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:03:52.319 14:28:43 rpc_client -- scripts/common.sh@353 -- # local d=1 00:03:52.319 14:28:43 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:52.319 14:28:43 rpc_client -- scripts/common.sh@355 -- # echo 1 00:03:52.319 14:28:43 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:03:52.319 14:28:43 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:03:52.319 14:28:43 rpc_client -- scripts/common.sh@353 -- # local d=2 00:03:52.319 14:28:43 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:52.319 14:28:43 rpc_client -- scripts/common.sh@355 -- # echo 2 00:03:52.319 14:28:43 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:03:52.319 14:28:43 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:52.319 14:28:43 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:52.319 14:28:43 rpc_client -- scripts/common.sh@368 -- # return 0 00:03:52.319 14:28:43 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:52.319 14:28:43 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:52.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.319 --rc genhtml_branch_coverage=1 00:03:52.319 --rc genhtml_function_coverage=1 00:03:52.319 --rc genhtml_legend=1 00:03:52.319 --rc geninfo_all_blocks=1 00:03:52.319 --rc geninfo_unexecuted_blocks=1 00:03:52.319 00:03:52.319 ' 00:03:52.319 14:28:43 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:52.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.319 --rc genhtml_branch_coverage=1 00:03:52.319 --rc genhtml_function_coverage=1 00:03:52.319 --rc genhtml_legend=1 00:03:52.319 --rc geninfo_all_blocks=1 00:03:52.319 --rc geninfo_unexecuted_blocks=1 00:03:52.319 00:03:52.319 ' 00:03:52.319 14:28:43 rpc_client -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:52.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.319 --rc genhtml_branch_coverage=1 00:03:52.319 --rc genhtml_function_coverage=1 00:03:52.319 --rc genhtml_legend=1 00:03:52.319 --rc geninfo_all_blocks=1 00:03:52.319 --rc geninfo_unexecuted_blocks=1 00:03:52.319 00:03:52.319 ' 00:03:52.319 14:28:43 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:52.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.319 --rc genhtml_branch_coverage=1 00:03:52.319 --rc genhtml_function_coverage=1 00:03:52.319 --rc genhtml_legend=1 00:03:52.319 --rc geninfo_all_blocks=1 00:03:52.319 --rc geninfo_unexecuted_blocks=1 00:03:52.319 00:03:52.319 ' 00:03:52.319 14:28:43 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:03:52.319 OK 00:03:52.319 14:28:43 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:52.319 00:03:52.319 real 0m0.217s 00:03:52.319 user 0m0.108s 00:03:52.319 sys 0m0.101s 00:03:52.319 14:28:43 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:52.319 14:28:43 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:52.319 ************************************ 00:03:52.319 END TEST rpc_client 00:03:52.319 ************************************ 00:03:52.319 14:28:43 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:03:52.319 14:28:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:52.319 14:28:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:52.319 14:28:43 -- common/autotest_common.sh@10 -- # set +x 00:03:52.319 ************************************ 00:03:52.319 START TEST json_config 00:03:52.319 ************************************ 00:03:52.319 14:28:43 json_config -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:03:52.319 14:28:43 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:52.319 14:28:43 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:03:52.319 14:28:43 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:52.319 14:28:43 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:52.319 14:28:43 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:52.319 14:28:43 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:52.319 14:28:43 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:52.319 14:28:43 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:03:52.319 14:28:43 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:03:52.319 14:28:43 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:03:52.319 14:28:43 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:03:52.319 14:28:43 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:03:52.319 14:28:43 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:03:52.319 14:28:43 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:03:52.319 14:28:43 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:52.319 14:28:43 json_config -- scripts/common.sh@344 -- # case "$op" in 00:03:52.319 14:28:43 json_config -- scripts/common.sh@345 -- # : 1 00:03:52.319 14:28:43 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:52.319 14:28:43 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:52.319 14:28:43 json_config -- scripts/common.sh@365 -- # decimal 1 00:03:52.319 14:28:43 json_config -- scripts/common.sh@353 -- # local d=1 00:03:52.319 14:28:43 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:52.319 14:28:43 json_config -- scripts/common.sh@355 -- # echo 1 00:03:52.319 14:28:43 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:03:52.319 14:28:43 json_config -- scripts/common.sh@366 -- # decimal 2 00:03:52.319 14:28:43 json_config -- scripts/common.sh@353 -- # local d=2 00:03:52.319 14:28:43 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:52.319 14:28:43 json_config -- scripts/common.sh@355 -- # echo 2 00:03:52.584 14:28:44 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:03:52.584 14:28:44 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:52.584 14:28:44 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:52.584 14:28:44 json_config -- scripts/common.sh@368 -- # return 0 00:03:52.584 14:28:44 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:52.584 14:28:44 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:52.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.584 --rc genhtml_branch_coverage=1 00:03:52.584 --rc genhtml_function_coverage=1 00:03:52.584 --rc genhtml_legend=1 00:03:52.584 --rc geninfo_all_blocks=1 00:03:52.584 --rc geninfo_unexecuted_blocks=1 00:03:52.584 00:03:52.584 ' 00:03:52.584 14:28:44 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:52.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.584 --rc genhtml_branch_coverage=1 00:03:52.584 --rc genhtml_function_coverage=1 00:03:52.584 --rc genhtml_legend=1 00:03:52.584 --rc geninfo_all_blocks=1 00:03:52.584 --rc geninfo_unexecuted_blocks=1 00:03:52.584 00:03:52.584 ' 00:03:52.584 14:28:44 json_config -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:52.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.584 --rc genhtml_branch_coverage=1 00:03:52.584 --rc genhtml_function_coverage=1 00:03:52.584 --rc genhtml_legend=1 00:03:52.584 --rc geninfo_all_blocks=1 00:03:52.584 --rc geninfo_unexecuted_blocks=1 00:03:52.584 00:03:52.584 ' 00:03:52.584 14:28:44 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:52.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.584 --rc genhtml_branch_coverage=1 00:03:52.584 --rc genhtml_function_coverage=1 00:03:52.584 --rc genhtml_legend=1 00:03:52.584 --rc geninfo_all_blocks=1 00:03:52.584 --rc geninfo_unexecuted_blocks=1 00:03:52.584 00:03:52.584 ' 00:03:52.584 14:28:44 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:52.584 14:28:44 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:52.584 14:28:44 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:52.584 14:28:44 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:52.584 14:28:44 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:52.584 14:28:44 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:52.584 14:28:44 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:52.584 14:28:44 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:52.584 14:28:44 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:52.584 14:28:44 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:52.584 14:28:44 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:52.584 14:28:44 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:52.584 14:28:44 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9bd3c322-ae73-4681-b7c6-8148d6e6f90c 00:03:52.584 14:28:44 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=9bd3c322-ae73-4681-b7c6-8148d6e6f90c 00:03:52.584 14:28:44 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:52.584 14:28:44 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:52.584 14:28:44 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:52.584 14:28:44 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:52.584 14:28:44 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:52.584 14:28:44 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:03:52.584 14:28:44 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:52.584 14:28:44 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:52.584 14:28:44 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:52.584 14:28:44 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:52.584 14:28:44 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:52.584 14:28:44 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:52.584 14:28:44 json_config -- paths/export.sh@5 -- # export PATH 00:03:52.584 14:28:44 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:52.584 14:28:44 json_config -- nvmf/common.sh@51 -- # : 0 00:03:52.584 14:28:44 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:52.584 14:28:44 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:52.584 14:28:44 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:52.584 14:28:44 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:52.584 14:28:44 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:52.584 14:28:44 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:52.584 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:52.584 14:28:44 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:52.584 14:28:44 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:52.584 14:28:44 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:52.584 14:28:44 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:03:52.584 14:28:44 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:52.584 14:28:44 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:52.584 14:28:44 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:52.584 WARNING: No tests are enabled so not running JSON configuration tests 00:03:52.584 14:28:44 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:52.584 14:28:44 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:03:52.584 14:28:44 json_config -- json_config/json_config.sh@28 -- # exit 0 00:03:52.584 00:03:52.584 real 0m0.152s 00:03:52.584 user 0m0.083s 00:03:52.584 sys 0m0.069s 00:03:52.584 14:28:44 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:52.584 ************************************ 00:03:52.584 END TEST json_config 00:03:52.584 ************************************ 00:03:52.584 14:28:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:52.584 14:28:44 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:03:52.584 14:28:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:52.585 14:28:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:52.585 14:28:44 -- common/autotest_common.sh@10 -- # set +x 00:03:52.585 ************************************ 00:03:52.585 START TEST json_config_extra_key 00:03:52.585 ************************************ 00:03:52.585 14:28:44 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:03:52.585 14:28:44 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:52.585 14:28:44 json_config_extra_key -- 
common/autotest_common.sh@1681 -- # lcov --version 00:03:52.585 14:28:44 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:52.585 14:28:44 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:52.585 14:28:44 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:52.585 14:28:44 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:52.585 14:28:44 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:52.585 14:28:44 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:03:52.585 14:28:44 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:03:52.585 14:28:44 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:03:52.585 14:28:44 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:03:52.585 14:28:44 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:03:52.585 14:28:44 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:03:52.585 14:28:44 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:03:52.585 14:28:44 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:52.585 14:28:44 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:03:52.585 14:28:44 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:03:52.585 14:28:44 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:52.585 14:28:44 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:52.585 14:28:44 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:03:52.585 14:28:44 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:03:52.585 14:28:44 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:52.585 14:28:44 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:03:52.585 14:28:44 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:03:52.585 14:28:44 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:03:52.585 14:28:44 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:03:52.585 14:28:44 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:52.585 14:28:44 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:03:52.585 14:28:44 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:03:52.585 14:28:44 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:52.585 14:28:44 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:52.585 14:28:44 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:03:52.585 14:28:44 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:52.585 14:28:44 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:52.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.585 --rc genhtml_branch_coverage=1 00:03:52.585 --rc genhtml_function_coverage=1 00:03:52.585 --rc genhtml_legend=1 00:03:52.585 --rc geninfo_all_blocks=1 00:03:52.585 --rc geninfo_unexecuted_blocks=1 00:03:52.585 00:03:52.585 ' 00:03:52.585 14:28:44 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:52.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.585 --rc genhtml_branch_coverage=1 00:03:52.585 --rc genhtml_function_coverage=1 00:03:52.585 --rc 
genhtml_legend=1 00:03:52.585 --rc geninfo_all_blocks=1 00:03:52.585 --rc geninfo_unexecuted_blocks=1 00:03:52.585 00:03:52.585 ' 00:03:52.585 14:28:44 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:52.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.585 --rc genhtml_branch_coverage=1 00:03:52.585 --rc genhtml_function_coverage=1 00:03:52.585 --rc genhtml_legend=1 00:03:52.585 --rc geninfo_all_blocks=1 00:03:52.585 --rc geninfo_unexecuted_blocks=1 00:03:52.585 00:03:52.585 ' 00:03:52.585 14:28:44 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:52.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.585 --rc genhtml_branch_coverage=1 00:03:52.585 --rc genhtml_function_coverage=1 00:03:52.585 --rc genhtml_legend=1 00:03:52.585 --rc geninfo_all_blocks=1 00:03:52.585 --rc geninfo_unexecuted_blocks=1 00:03:52.585 00:03:52.585 ' 00:03:52.585 14:28:44 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:52.585 14:28:44 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:03:52.585 14:28:44 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:52.585 14:28:44 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:52.585 14:28:44 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:52.585 14:28:44 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:52.585 14:28:44 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:52.585 14:28:44 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:52.585 14:28:44 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:52.585 14:28:44 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:52.585 14:28:44 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:52.585 14:28:44 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:52.585 14:28:44 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9bd3c322-ae73-4681-b7c6-8148d6e6f90c 00:03:52.585 14:28:44 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=9bd3c322-ae73-4681-b7c6-8148d6e6f90c 00:03:52.585 14:28:44 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:52.585 14:28:44 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:52.585 14:28:44 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:52.585 14:28:44 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:52.585 14:28:44 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:52.585 14:28:44 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:03:52.585 14:28:44 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:52.585 14:28:44 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:52.585 14:28:44 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:52.585 14:28:44 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:52.585 14:28:44 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:52.585 14:28:44 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:52.585 14:28:44 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:03:52.585 14:28:44 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:52.585 14:28:44 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:03:52.585 14:28:44 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:52.585 14:28:44 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:52.585 14:28:44 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:52.585 14:28:44 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:52.585 14:28:44 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:03:52.585 14:28:44 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:52.585 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:52.585 14:28:44 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:52.585 14:28:44 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:52.585 14:28:44 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:52.585 14:28:44 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:03:52.585 14:28:44 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:03:52.585 14:28:44 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:03:52.585 14:28:44 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:03:52.585 14:28:44 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:03:52.585 14:28:44 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:03:52.585 14:28:44 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:03:52.585 INFO: launching applications... 00:03:52.585 14:28:44 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:03:52.585 14:28:44 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:03:52.585 14:28:44 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:52.585 14:28:44 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:03:52.585 14:28:44 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:03:52.585 14:28:44 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:03:52.585 14:28:44 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:03:52.585 14:28:44 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:52.586 14:28:44 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:52.586 14:28:44 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:03:52.586 14:28:44 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:52.586 14:28:44 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:52.586 Waiting for target to run... 00:03:52.586 14:28:44 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=56819 00:03:52.586 14:28:44 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:52.586 14:28:44 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 56819 /var/tmp/spdk_tgt.sock 00:03:52.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:52.586 14:28:44 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 56819 ']' 00:03:52.586 14:28:44 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:52.586 14:28:44 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:52.586 14:28:44 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:03:52.586 14:28:44 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:52.586 14:28:44 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:03:52.586 14:28:44 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:03:52.848 [2024-10-01 14:28:44.333290] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:03:52.848 [2024-10-01 14:28:44.333433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56819 ] 00:03:53.109 [2024-10-01 14:28:44.747346] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:53.370 [2024-10-01 14:28:44.970635] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:03:53.941 14:28:45 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:53.941 00:03:53.941 14:28:45 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:03:53.941 14:28:45 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:03:53.941 INFO: shutting down applications... 00:03:53.941 14:28:45 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:03:53.941 14:28:45 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:03:53.941 14:28:45 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:03:53.941 14:28:45 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:53.942 14:28:45 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 56819 ]] 00:03:53.942 14:28:45 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 56819 00:03:53.942 14:28:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:53.942 14:28:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:53.942 14:28:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 56819 00:03:53.942 14:28:45 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:03:54.512 14:28:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:03:54.512 14:28:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:54.512 14:28:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 56819 00:03:54.512 14:28:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:03:55.151 14:28:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:03:55.151 14:28:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:55.151 14:28:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 56819 00:03:55.151 14:28:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:03:55.412 14:28:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:03:55.412 14:28:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:55.412 14:28:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 56819 00:03:55.412 14:28:47 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:03:55.986 14:28:47 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:03:55.986 14:28:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:55.986 14:28:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 56819 00:03:55.986 14:28:47 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:55.986 14:28:47 json_config_extra_key -- json_config/common.sh@43 -- # break 00:03:55.986 SPDK target shutdown done 00:03:55.986 14:28:47 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:55.986 14:28:47 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:55.986 Success 00:03:55.986 14:28:47 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:03:55.986 00:03:55.986 real 0m3.436s 00:03:55.986 user 0m3.135s 00:03:55.986 sys 0m0.567s 00:03:55.986 14:28:47 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:55.986 14:28:47 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:03:55.986 ************************************ 00:03:55.986 END TEST json_config_extra_key 00:03:55.986 ************************************ 00:03:55.986 14:28:47 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:55.986 14:28:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:55.986 14:28:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:55.986 14:28:47 -- common/autotest_common.sh@10 -- # set +x 00:03:55.986 ************************************ 00:03:55.986 START TEST alias_rpc 00:03:55.986 ************************************ 00:03:55.986 14:28:47 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:55.986 * Looking for test storage... 
00:03:56.248 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:03:56.248 14:28:47 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:56.248 14:28:47 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:03:56.248 14:28:47 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:56.248 14:28:47 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:56.248 14:28:47 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:56.248 14:28:47 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:56.248 14:28:47 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:56.248 14:28:47 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:56.248 14:28:47 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:56.248 14:28:47 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:56.248 14:28:47 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:56.248 14:28:47 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:56.248 14:28:47 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:56.248 14:28:47 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:56.248 14:28:47 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:56.248 14:28:47 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:56.248 14:28:47 alias_rpc -- scripts/common.sh@345 -- # : 1 00:03:56.248 14:28:47 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:56.248 14:28:47 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:56.248 14:28:47 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:56.248 14:28:47 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:03:56.248 14:28:47 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:56.248 14:28:47 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:03:56.248 14:28:47 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:56.248 14:28:47 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:56.248 14:28:47 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:03:56.248 14:28:47 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:56.248 14:28:47 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:03:56.248 14:28:47 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:56.248 14:28:47 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:56.248 14:28:47 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:56.248 14:28:47 alias_rpc -- scripts/common.sh@368 -- # return 0 00:03:56.248 14:28:47 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:56.248 14:28:47 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:56.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.248 --rc genhtml_branch_coverage=1 00:03:56.248 --rc genhtml_function_coverage=1 00:03:56.248 --rc genhtml_legend=1 00:03:56.248 --rc geninfo_all_blocks=1 00:03:56.248 --rc geninfo_unexecuted_blocks=1 00:03:56.248 00:03:56.248 ' 00:03:56.248 14:28:47 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:56.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.248 --rc genhtml_branch_coverage=1 00:03:56.248 --rc genhtml_function_coverage=1 00:03:56.248 --rc genhtml_legend=1 00:03:56.248 --rc geninfo_all_blocks=1 00:03:56.248 --rc geninfo_unexecuted_blocks=1 00:03:56.248 00:03:56.248 ' 00:03:56.248 14:28:47 alias_rpc -- common/autotest_common.sh@1695 -- 
# export 'LCOV=lcov 00:03:56.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.248 --rc genhtml_branch_coverage=1 00:03:56.248 --rc genhtml_function_coverage=1 00:03:56.248 --rc genhtml_legend=1 00:03:56.248 --rc geninfo_all_blocks=1 00:03:56.248 --rc geninfo_unexecuted_blocks=1 00:03:56.248 00:03:56.248 ' 00:03:56.248 14:28:47 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:56.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.248 --rc genhtml_branch_coverage=1 00:03:56.248 --rc genhtml_function_coverage=1 00:03:56.248 --rc genhtml_legend=1 00:03:56.248 --rc geninfo_all_blocks=1 00:03:56.248 --rc geninfo_unexecuted_blocks=1 00:03:56.248 00:03:56.248 ' 00:03:56.248 14:28:47 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:03:56.248 14:28:47 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=56912 00:03:56.248 14:28:47 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 56912 00:03:56.248 14:28:47 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 56912 ']' 00:03:56.248 14:28:47 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:56.248 14:28:47 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:56.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:56.248 14:28:47 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:56.248 14:28:47 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:56.248 14:28:47 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:03:56.248 14:28:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.248 [2024-10-01 14:28:47.836449] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:03:56.248 [2024-10-01 14:28:47.836583] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56912 ] 00:03:56.509 [2024-10-01 14:28:47.988339] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:56.769 [2024-10-01 14:28:48.239150] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:03:57.342 14:28:48 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:57.342 14:28:48 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:03:57.342 14:28:48 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:03:57.601 14:28:49 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 56912 00:03:57.601 14:28:49 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 56912 ']' 00:03:57.601 14:28:49 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 56912 00:03:57.601 14:28:49 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:03:57.601 14:28:49 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:57.601 14:28:49 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 56912 00:03:57.601 killing process with pid 56912 00:03:57.601 14:28:49 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:57.601 14:28:49 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:57.601 14:28:49 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 56912' 00:03:57.601 14:28:49 alias_rpc -- common/autotest_common.sh@969 -- # kill 56912 00:03:57.601 14:28:49 alias_rpc -- common/autotest_common.sh@974 -- # wait 56912 00:03:59.518 00:03:59.518 real 0m3.416s 00:03:59.518 user 0m3.400s 00:03:59.518 sys 0m0.556s 00:03:59.518 14:28:51 alias_rpc -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:03:59.518 ************************************ 00:03:59.518 END TEST alias_rpc 00:03:59.518 ************************************ 00:03:59.518 14:28:51 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.518 14:28:51 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:03:59.518 14:28:51 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:03:59.518 14:28:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:59.518 14:28:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:59.518 14:28:51 -- common/autotest_common.sh@10 -- # set +x 00:03:59.518 ************************************ 00:03:59.518 START TEST spdkcli_tcp 00:03:59.518 ************************************ 00:03:59.518 14:28:51 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:03:59.518 * Looking for test storage... 00:03:59.518 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:03:59.518 14:28:51 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:59.518 14:28:51 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:03:59.518 14:28:51 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:59.779 14:28:51 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:59.779 14:28:51 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:59.779 14:28:51 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:59.779 14:28:51 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:59.779 14:28:51 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:03:59.779 14:28:51 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:03:59.779 14:28:51 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:03:59.779 14:28:51 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:03:59.779 14:28:51 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:03:59.779 
14:28:51 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:03:59.779 14:28:51 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:03:59.779 14:28:51 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:59.779 14:28:51 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:03:59.780 14:28:51 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:03:59.780 14:28:51 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:59.780 14:28:51 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:59.780 14:28:51 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:03:59.780 14:28:51 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:03:59.780 14:28:51 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:59.780 14:28:51 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:03:59.780 14:28:51 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:03:59.780 14:28:51 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:03:59.780 14:28:51 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:03:59.780 14:28:51 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:59.780 14:28:51 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:03:59.780 14:28:51 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:03:59.780 14:28:51 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:59.780 14:28:51 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:59.780 14:28:51 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:03:59.780 14:28:51 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:59.780 14:28:51 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:59.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.780 --rc genhtml_branch_coverage=1 00:03:59.780 --rc genhtml_function_coverage=1 00:03:59.780 --rc genhtml_legend=1 
00:03:59.780 --rc geninfo_all_blocks=1 00:03:59.780 --rc geninfo_unexecuted_blocks=1 00:03:59.780 00:03:59.780 ' 00:03:59.780 14:28:51 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:59.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.780 --rc genhtml_branch_coverage=1 00:03:59.780 --rc genhtml_function_coverage=1 00:03:59.780 --rc genhtml_legend=1 00:03:59.780 --rc geninfo_all_blocks=1 00:03:59.780 --rc geninfo_unexecuted_blocks=1 00:03:59.780 00:03:59.780 ' 00:03:59.780 14:28:51 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:59.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.780 --rc genhtml_branch_coverage=1 00:03:59.780 --rc genhtml_function_coverage=1 00:03:59.780 --rc genhtml_legend=1 00:03:59.780 --rc geninfo_all_blocks=1 00:03:59.780 --rc geninfo_unexecuted_blocks=1 00:03:59.780 00:03:59.780 ' 00:03:59.780 14:28:51 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:59.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.780 --rc genhtml_branch_coverage=1 00:03:59.780 --rc genhtml_function_coverage=1 00:03:59.780 --rc genhtml_legend=1 00:03:59.780 --rc geninfo_all_blocks=1 00:03:59.780 --rc geninfo_unexecuted_blocks=1 00:03:59.780 00:03:59.780 ' 00:03:59.780 14:28:51 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:03:59.780 14:28:51 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:03:59.780 14:28:51 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:03:59.780 14:28:51 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:03:59.780 14:28:51 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:03:59.780 14:28:51 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:03:59.780 14:28:51 spdkcli_tcp -- 
spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:03:59.780 14:28:51 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:59.780 14:28:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:59.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:59.780 14:28:51 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57013 00:03:59.780 14:28:51 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57013 00:03:59.780 14:28:51 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 57013 ']' 00:03:59.780 14:28:51 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:59.780 14:28:51 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:59.780 14:28:51 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:59.780 14:28:51 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:59.780 14:28:51 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:03:59.780 14:28:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:59.780 [2024-10-01 14:28:51.294558] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:03:59.780 [2024-10-01 14:28:51.294820] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57013 ] 00:03:59.780 [2024-10-01 14:28:51.441738] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:00.041 [2024-10-01 14:28:51.632731] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:00.041 [2024-10-01 14:28:51.632765] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:00.615 14:28:52 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:00.615 14:28:52 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:04:00.615 14:28:52 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57030 00:04:00.615 14:28:52 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:00.615 14:28:52 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:00.878 [ 00:04:00.878 "bdev_malloc_delete", 00:04:00.878 "bdev_malloc_create", 00:04:00.878 "bdev_null_resize", 00:04:00.878 "bdev_null_delete", 00:04:00.878 "bdev_null_create", 00:04:00.878 "bdev_nvme_cuse_unregister", 00:04:00.878 "bdev_nvme_cuse_register", 00:04:00.878 "bdev_opal_new_user", 00:04:00.878 "bdev_opal_set_lock_state", 00:04:00.878 "bdev_opal_delete", 00:04:00.878 "bdev_opal_get_info", 00:04:00.878 "bdev_opal_create", 00:04:00.878 "bdev_nvme_opal_revert", 00:04:00.878 "bdev_nvme_opal_init", 00:04:00.878 "bdev_nvme_send_cmd", 00:04:00.878 "bdev_nvme_set_keys", 00:04:00.878 "bdev_nvme_get_path_iostat", 00:04:00.878 "bdev_nvme_get_mdns_discovery_info", 00:04:00.878 "bdev_nvme_stop_mdns_discovery", 00:04:00.878 "bdev_nvme_start_mdns_discovery", 00:04:00.878 "bdev_nvme_set_multipath_policy", 00:04:00.878 
"bdev_nvme_set_preferred_path", 00:04:00.878 "bdev_nvme_get_io_paths", 00:04:00.878 "bdev_nvme_remove_error_injection", 00:04:00.878 "bdev_nvme_add_error_injection", 00:04:00.878 "bdev_nvme_get_discovery_info", 00:04:00.878 "bdev_nvme_stop_discovery", 00:04:00.878 "bdev_nvme_start_discovery", 00:04:00.878 "bdev_nvme_get_controller_health_info", 00:04:00.878 "bdev_nvme_disable_controller", 00:04:00.878 "bdev_nvme_enable_controller", 00:04:00.878 "bdev_nvme_reset_controller", 00:04:00.878 "bdev_nvme_get_transport_statistics", 00:04:00.878 "bdev_nvme_apply_firmware", 00:04:00.878 "bdev_nvme_detach_controller", 00:04:00.878 "bdev_nvme_get_controllers", 00:04:00.878 "bdev_nvme_attach_controller", 00:04:00.878 "bdev_nvme_set_hotplug", 00:04:00.878 "bdev_nvme_set_options", 00:04:00.878 "bdev_passthru_delete", 00:04:00.878 "bdev_passthru_create", 00:04:00.878 "bdev_lvol_set_parent_bdev", 00:04:00.878 "bdev_lvol_set_parent", 00:04:00.878 "bdev_lvol_check_shallow_copy", 00:04:00.878 "bdev_lvol_start_shallow_copy", 00:04:00.878 "bdev_lvol_grow_lvstore", 00:04:00.878 "bdev_lvol_get_lvols", 00:04:00.878 "bdev_lvol_get_lvstores", 00:04:00.878 "bdev_lvol_delete", 00:04:00.878 "bdev_lvol_set_read_only", 00:04:00.878 "bdev_lvol_resize", 00:04:00.878 "bdev_lvol_decouple_parent", 00:04:00.878 "bdev_lvol_inflate", 00:04:00.878 "bdev_lvol_rename", 00:04:00.878 "bdev_lvol_clone_bdev", 00:04:00.878 "bdev_lvol_clone", 00:04:00.878 "bdev_lvol_snapshot", 00:04:00.878 "bdev_lvol_create", 00:04:00.878 "bdev_lvol_delete_lvstore", 00:04:00.878 "bdev_lvol_rename_lvstore", 00:04:00.878 "bdev_lvol_create_lvstore", 00:04:00.878 "bdev_raid_set_options", 00:04:00.878 "bdev_raid_remove_base_bdev", 00:04:00.878 "bdev_raid_add_base_bdev", 00:04:00.878 "bdev_raid_delete", 00:04:00.878 "bdev_raid_create", 00:04:00.878 "bdev_raid_get_bdevs", 00:04:00.878 "bdev_error_inject_error", 00:04:00.878 "bdev_error_delete", 00:04:00.878 "bdev_error_create", 00:04:00.878 "bdev_split_delete", 00:04:00.878 
"bdev_split_create", 00:04:00.878 "bdev_delay_delete", 00:04:00.878 "bdev_delay_create", 00:04:00.878 "bdev_delay_update_latency", 00:04:00.878 "bdev_zone_block_delete", 00:04:00.878 "bdev_zone_block_create", 00:04:00.878 "blobfs_create", 00:04:00.878 "blobfs_detect", 00:04:00.878 "blobfs_set_cache_size", 00:04:00.878 "bdev_aio_delete", 00:04:00.878 "bdev_aio_rescan", 00:04:00.878 "bdev_aio_create", 00:04:00.878 "bdev_ftl_set_property", 00:04:00.878 "bdev_ftl_get_properties", 00:04:00.878 "bdev_ftl_get_stats", 00:04:00.878 "bdev_ftl_unmap", 00:04:00.878 "bdev_ftl_unload", 00:04:00.878 "bdev_ftl_delete", 00:04:00.878 "bdev_ftl_load", 00:04:00.878 "bdev_ftl_create", 00:04:00.878 "bdev_virtio_attach_controller", 00:04:00.878 "bdev_virtio_scsi_get_devices", 00:04:00.878 "bdev_virtio_detach_controller", 00:04:00.878 "bdev_virtio_blk_set_hotplug", 00:04:00.878 "bdev_iscsi_delete", 00:04:00.878 "bdev_iscsi_create", 00:04:00.878 "bdev_iscsi_set_options", 00:04:00.878 "accel_error_inject_error", 00:04:00.878 "ioat_scan_accel_module", 00:04:00.878 "dsa_scan_accel_module", 00:04:00.878 "iaa_scan_accel_module", 00:04:00.878 "keyring_file_remove_key", 00:04:00.878 "keyring_file_add_key", 00:04:00.878 "keyring_linux_set_options", 00:04:00.878 "fsdev_aio_delete", 00:04:00.878 "fsdev_aio_create", 00:04:00.878 "iscsi_get_histogram", 00:04:00.878 "iscsi_enable_histogram", 00:04:00.878 "iscsi_set_options", 00:04:00.878 "iscsi_get_auth_groups", 00:04:00.878 "iscsi_auth_group_remove_secret", 00:04:00.878 "iscsi_auth_group_add_secret", 00:04:00.878 "iscsi_delete_auth_group", 00:04:00.878 "iscsi_create_auth_group", 00:04:00.878 "iscsi_set_discovery_auth", 00:04:00.878 "iscsi_get_options", 00:04:00.878 "iscsi_target_node_request_logout", 00:04:00.878 "iscsi_target_node_set_redirect", 00:04:00.878 "iscsi_target_node_set_auth", 00:04:00.878 "iscsi_target_node_add_lun", 00:04:00.878 "iscsi_get_stats", 00:04:00.878 "iscsi_get_connections", 00:04:00.878 "iscsi_portal_group_set_auth", 
00:04:00.878 "iscsi_start_portal_group", 00:04:00.878 "iscsi_delete_portal_group", 00:04:00.878 "iscsi_create_portal_group", 00:04:00.878 "iscsi_get_portal_groups", 00:04:00.878 "iscsi_delete_target_node", 00:04:00.878 "iscsi_target_node_remove_pg_ig_maps", 00:04:00.878 "iscsi_target_node_add_pg_ig_maps", 00:04:00.878 "iscsi_create_target_node", 00:04:00.878 "iscsi_get_target_nodes", 00:04:00.878 "iscsi_delete_initiator_group", 00:04:00.878 "iscsi_initiator_group_remove_initiators", 00:04:00.878 "iscsi_initiator_group_add_initiators", 00:04:00.878 "iscsi_create_initiator_group", 00:04:00.878 "iscsi_get_initiator_groups", 00:04:00.878 "nvmf_set_crdt", 00:04:00.878 "nvmf_set_config", 00:04:00.878 "nvmf_set_max_subsystems", 00:04:00.878 "nvmf_stop_mdns_prr", 00:04:00.878 "nvmf_publish_mdns_prr", 00:04:00.878 "nvmf_subsystem_get_listeners", 00:04:00.878 "nvmf_subsystem_get_qpairs", 00:04:00.878 "nvmf_subsystem_get_controllers", 00:04:00.878 "nvmf_get_stats", 00:04:00.878 "nvmf_get_transports", 00:04:00.878 "nvmf_create_transport", 00:04:00.878 "nvmf_get_targets", 00:04:00.878 "nvmf_delete_target", 00:04:00.878 "nvmf_create_target", 00:04:00.878 "nvmf_subsystem_allow_any_host", 00:04:00.878 "nvmf_subsystem_set_keys", 00:04:00.878 "nvmf_subsystem_remove_host", 00:04:00.878 "nvmf_subsystem_add_host", 00:04:00.878 "nvmf_ns_remove_host", 00:04:00.878 "nvmf_ns_add_host", 00:04:00.878 "nvmf_subsystem_remove_ns", 00:04:00.878 "nvmf_subsystem_set_ns_ana_group", 00:04:00.878 "nvmf_subsystem_add_ns", 00:04:00.878 "nvmf_subsystem_listener_set_ana_state", 00:04:00.878 "nvmf_discovery_get_referrals", 00:04:00.878 "nvmf_discovery_remove_referral", 00:04:00.878 "nvmf_discovery_add_referral", 00:04:00.878 "nvmf_subsystem_remove_listener", 00:04:00.878 "nvmf_subsystem_add_listener", 00:04:00.878 "nvmf_delete_subsystem", 00:04:00.878 "nvmf_create_subsystem", 00:04:00.878 "nvmf_get_subsystems", 00:04:00.878 "env_dpdk_get_mem_stats", 00:04:00.878 "nbd_get_disks", 00:04:00.878 
"nbd_stop_disk", 00:04:00.878 "nbd_start_disk", 00:04:00.878 "ublk_recover_disk", 00:04:00.878 "ublk_get_disks", 00:04:00.878 "ublk_stop_disk", 00:04:00.878 "ublk_start_disk", 00:04:00.878 "ublk_destroy_target", 00:04:00.878 "ublk_create_target", 00:04:00.879 "virtio_blk_create_transport", 00:04:00.879 "virtio_blk_get_transports", 00:04:00.879 "vhost_controller_set_coalescing", 00:04:00.879 "vhost_get_controllers", 00:04:00.879 "vhost_delete_controller", 00:04:00.879 "vhost_create_blk_controller", 00:04:00.879 "vhost_scsi_controller_remove_target", 00:04:00.879 "vhost_scsi_controller_add_target", 00:04:00.879 "vhost_start_scsi_controller", 00:04:00.879 "vhost_create_scsi_controller", 00:04:00.879 "thread_set_cpumask", 00:04:00.879 "scheduler_set_options", 00:04:00.879 "framework_get_governor", 00:04:00.879 "framework_get_scheduler", 00:04:00.879 "framework_set_scheduler", 00:04:00.879 "framework_get_reactors", 00:04:00.879 "thread_get_io_channels", 00:04:00.879 "thread_get_pollers", 00:04:00.879 "thread_get_stats", 00:04:00.879 "framework_monitor_context_switch", 00:04:00.879 "spdk_kill_instance", 00:04:00.879 "log_enable_timestamps", 00:04:00.879 "log_get_flags", 00:04:00.879 "log_clear_flag", 00:04:00.879 "log_set_flag", 00:04:00.879 "log_get_level", 00:04:00.879 "log_set_level", 00:04:00.879 "log_get_print_level", 00:04:00.879 "log_set_print_level", 00:04:00.879 "framework_enable_cpumask_locks", 00:04:00.879 "framework_disable_cpumask_locks", 00:04:00.879 "framework_wait_init", 00:04:00.879 "framework_start_init", 00:04:00.879 "scsi_get_devices", 00:04:00.879 "bdev_get_histogram", 00:04:00.879 "bdev_enable_histogram", 00:04:00.879 "bdev_set_qos_limit", 00:04:00.879 "bdev_set_qd_sampling_period", 00:04:00.879 "bdev_get_bdevs", 00:04:00.879 "bdev_reset_iostat", 00:04:00.879 "bdev_get_iostat", 00:04:00.879 "bdev_examine", 00:04:00.879 "bdev_wait_for_examine", 00:04:00.879 "bdev_set_options", 00:04:00.879 "accel_get_stats", 00:04:00.879 "accel_set_options", 
00:04:00.879 "accel_set_driver", 00:04:00.879 "accel_crypto_key_destroy", 00:04:00.879 "accel_crypto_keys_get", 00:04:00.879 "accel_crypto_key_create", 00:04:00.879 "accel_assign_opc", 00:04:00.879 "accel_get_module_info", 00:04:00.879 "accel_get_opc_assignments", 00:04:00.879 "vmd_rescan", 00:04:00.879 "vmd_remove_device", 00:04:00.879 "vmd_enable", 00:04:00.879 "sock_get_default_impl", 00:04:00.879 "sock_set_default_impl", 00:04:00.879 "sock_impl_set_options", 00:04:00.879 "sock_impl_get_options", 00:04:00.879 "iobuf_get_stats", 00:04:00.879 "iobuf_set_options", 00:04:00.879 "keyring_get_keys", 00:04:00.879 "framework_get_pci_devices", 00:04:00.879 "framework_get_config", 00:04:00.879 "framework_get_subsystems", 00:04:00.879 "fsdev_set_opts", 00:04:00.879 "fsdev_get_opts", 00:04:00.879 "trace_get_info", 00:04:00.879 "trace_get_tpoint_group_mask", 00:04:00.879 "trace_disable_tpoint_group", 00:04:00.879 "trace_enable_tpoint_group", 00:04:00.879 "trace_clear_tpoint_mask", 00:04:00.879 "trace_set_tpoint_mask", 00:04:00.879 "notify_get_notifications", 00:04:00.879 "notify_get_types", 00:04:00.879 "spdk_get_version", 00:04:00.879 "rpc_get_methods" 00:04:00.879 ] 00:04:00.879 14:28:52 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:00.879 14:28:52 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:00.879 14:28:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:00.879 14:28:52 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:00.879 14:28:52 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57013 00:04:00.879 14:28:52 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 57013 ']' 00:04:00.879 14:28:52 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 57013 00:04:00.879 14:28:52 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:04:00.879 14:28:52 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:00.879 14:28:52 spdkcli_tcp -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57013 00:04:00.879 killing process with pid 57013 00:04:00.879 14:28:52 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:00.879 14:28:52 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:00.879 14:28:52 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57013' 00:04:00.879 14:28:52 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 57013 00:04:00.879 14:28:52 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 57013 00:04:02.818 ************************************ 00:04:02.818 END TEST spdkcli_tcp 00:04:02.818 ************************************ 00:04:02.818 00:04:02.818 real 0m3.150s 00:04:02.818 user 0m5.572s 00:04:02.818 sys 0m0.457s 00:04:02.818 14:28:54 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:02.818 14:28:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:02.818 14:28:54 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:02.818 14:28:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:02.818 14:28:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:02.818 14:28:54 -- common/autotest_common.sh@10 -- # set +x 00:04:02.818 ************************************ 00:04:02.818 START TEST dpdk_mem_utility 00:04:02.818 ************************************ 00:04:02.818 14:28:54 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:02.818 * Looking for test storage... 
00:04:02.818 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:02.818 14:28:54 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:02.818 14:28:54 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:04:02.818 14:28:54 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:02.818 14:28:54 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:02.818 14:28:54 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:02.818 14:28:54 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:02.818 14:28:54 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:02.818 14:28:54 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:02.818 14:28:54 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:02.818 14:28:54 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:02.818 14:28:54 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:02.818 14:28:54 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:02.818 14:28:54 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:02.818 14:28:54 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:02.818 14:28:54 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:02.818 14:28:54 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:02.818 14:28:54 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:02.818 14:28:54 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:02.818 14:28:54 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:02.818 14:28:54 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:02.818 14:28:54 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:02.818 14:28:54 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:02.818 14:28:54 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:02.818 14:28:54 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:02.818 14:28:54 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:02.818 14:28:54 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:02.818 14:28:54 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:02.818 14:28:54 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:02.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:02.818 14:28:54 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:02.818 14:28:54 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:02.818 14:28:54 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:02.818 14:28:54 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:02.818 14:28:54 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:02.818 14:28:54 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:02.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.819 --rc genhtml_branch_coverage=1 00:04:02.819 --rc genhtml_function_coverage=1 00:04:02.819 --rc genhtml_legend=1 00:04:02.819 --rc geninfo_all_blocks=1 00:04:02.819 --rc geninfo_unexecuted_blocks=1 00:04:02.819 00:04:02.819 ' 00:04:02.819 14:28:54 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:02.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.819 --rc genhtml_branch_coverage=1 00:04:02.819 --rc genhtml_function_coverage=1 
00:04:02.819 --rc genhtml_legend=1 00:04:02.819 --rc geninfo_all_blocks=1 00:04:02.819 --rc geninfo_unexecuted_blocks=1 00:04:02.819 00:04:02.819 ' 00:04:02.819 14:28:54 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:02.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.819 --rc genhtml_branch_coverage=1 00:04:02.819 --rc genhtml_function_coverage=1 00:04:02.819 --rc genhtml_legend=1 00:04:02.819 --rc geninfo_all_blocks=1 00:04:02.819 --rc geninfo_unexecuted_blocks=1 00:04:02.819 00:04:02.819 ' 00:04:02.819 14:28:54 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:02.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.819 --rc genhtml_branch_coverage=1 00:04:02.819 --rc genhtml_function_coverage=1 00:04:02.819 --rc genhtml_legend=1 00:04:02.819 --rc geninfo_all_blocks=1 00:04:02.819 --rc geninfo_unexecuted_blocks=1 00:04:02.819 00:04:02.819 ' 00:04:02.819 14:28:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:02.819 14:28:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57130 00:04:02.819 14:28:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57130 00:04:02.819 14:28:54 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 57130 ']' 00:04:02.819 14:28:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:02.819 14:28:54 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:02.819 14:28:54 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:02.819 14:28:54 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:02.819 14:28:54 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:02.819 14:28:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:03.081 [2024-10-01 14:28:54.508574] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:04:03.081 [2024-10-01 14:28:54.508896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57130 ] 00:04:03.081 [2024-10-01 14:28:54.657588] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.342 [2024-10-01 14:28:54.845873] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.916 14:28:55 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:03.916 14:28:55 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:04:03.916 14:28:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:03.916 14:28:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:03.916 14:28:55 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.916 14:28:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:03.916 { 00:04:03.916 "filename": "/tmp/spdk_mem_dump.txt" 00:04:03.916 } 00:04:03.916 14:28:55 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:03.916 14:28:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:03.916 DPDK memory size 866.000000 MiB in 1 heap(s) 00:04:03.916 1 heaps totaling size 866.000000 MiB 00:04:03.916 size: 866.000000 MiB heap id: 0 00:04:03.916 end heaps---------- 00:04:03.916 9 mempools totaling size 642.649841 MiB 00:04:03.916 
size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:03.916 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:03.916 size: 92.545471 MiB name: bdev_io_57130 00:04:03.916 size: 51.011292 MiB name: evtpool_57130 00:04:03.916 size: 50.003479 MiB name: msgpool_57130 00:04:03.916 size: 36.509338 MiB name: fsdev_io_57130 00:04:03.916 size: 21.763794 MiB name: PDU_Pool 00:04:03.916 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:03.916 size: 0.026123 MiB name: Session_Pool 00:04:03.916 end mempools------- 00:04:03.916 6 memzones totaling size 4.142822 MiB 00:04:03.916 size: 1.000366 MiB name: RG_ring_0_57130 00:04:03.916 size: 1.000366 MiB name: RG_ring_1_57130 00:04:03.916 size: 1.000366 MiB name: RG_ring_4_57130 00:04:03.916 size: 1.000366 MiB name: RG_ring_5_57130 00:04:03.916 size: 0.125366 MiB name: RG_ring_2_57130 00:04:03.916 size: 0.015991 MiB name: RG_ring_3_57130 00:04:03.916 end memzones------- 00:04:03.916 14:28:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:03.916 heap id: 0 total size: 866.000000 MiB number of busy elements: 314 number of free elements: 19 00:04:03.916 list of free elements. 
size: 19.913818 MiB 00:04:03.916 element at address: 0x200000400000 with size: 1.999451 MiB 00:04:03.916 element at address: 0x200000800000 with size: 1.996887 MiB 00:04:03.916 element at address: 0x200009600000 with size: 1.995972 MiB 00:04:03.916 element at address: 0x20000d800000 with size: 1.995972 MiB 00:04:03.916 element at address: 0x200007000000 with size: 1.991028 MiB 00:04:03.916 element at address: 0x20001bf00040 with size: 0.999939 MiB 00:04:03.916 element at address: 0x20001c300040 with size: 0.999939 MiB 00:04:03.917 element at address: 0x20001c400000 with size: 0.999084 MiB 00:04:03.917 element at address: 0x200035000000 with size: 0.994324 MiB 00:04:03.917 element at address: 0x20001bc00000 with size: 0.959656 MiB 00:04:03.917 element at address: 0x20001c700040 with size: 0.936401 MiB 00:04:03.917 element at address: 0x200000200000 with size: 0.832153 MiB 00:04:03.917 element at address: 0x20001de00000 with size: 0.560974 MiB 00:04:03.917 element at address: 0x200003e00000 with size: 0.490662 MiB 00:04:03.917 element at address: 0x20001c000000 with size: 0.488220 MiB 00:04:03.917 element at address: 0x20001c800000 with size: 0.485413 MiB 00:04:03.917 element at address: 0x200015e00000 with size: 0.443237 MiB 00:04:03.917 element at address: 0x20002b200000 with size: 0.391663 MiB 00:04:03.917 element at address: 0x200003a00000 with size: 0.352844 MiB 00:04:03.917 list of standard malloc elements. 
size: 199.287476 MiB 00:04:03.917 element at address: 0x20000d9fef80 with size: 132.000183 MiB 00:04:03.917 element at address: 0x2000097fef80 with size: 64.000183 MiB 00:04:03.917 element at address: 0x20001bdfff80 with size: 1.000183 MiB 00:04:03.917 element at address: 0x20001c1fff80 with size: 1.000183 MiB 00:04:03.917 element at address: 0x20001c5fff80 with size: 1.000183 MiB 00:04:03.917 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:03.917 element at address: 0x20001c7eff40 with size: 0.062683 MiB 00:04:03.917 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:03.917 element at address: 0x20000d7ff040 with size: 0.000427 MiB 00:04:03.917 element at address: 0x20001c7efdc0 with size: 0.000366 MiB 00:04:03.917 element at address: 0x200015dff040 with size: 0.000305 MiB
[free-element dump condensed: several hundred 0.000244 MiB elements across address ranges 0x2000002d5080-0x2000002d7b00, 0x200003a7e9c0-0x200003affa80, 0x200003e7d9c0-0x200003eff000, 0x20000d7ff200-0x20000d7fff00, 0x200015dff180-0x200015e72180, 0x200015ef24c0, 0x20001bcfdd00, 0x20001c07cfc0-0x20001c07d9c0, 0x20001c0fdd00, 0x20001c4ffc40, 0x20001c7efbc0-0x20001c7efcc0, 0x20001c8bc680, 0x20001de8f9c0-0x20001de953c0, 0x20002b264440-0x20002b26fe80]
00:04:03.919 list of memzone associated elements. size: 646.798706 MiB 00:04:03.919 element at address: 0x20001de954c0 with size: 211.416809 MiB 00:04:03.919 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:03.919 element at address: 0x20002b26ff80 with size: 157.562622 MiB 00:04:03.919 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:03.919 element at address: 0x200015ff4740 with size: 92.045105 MiB 00:04:03.919 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57130_0 00:04:03.919 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:04:03.919 associated memzone info: size: 48.002930 MiB name: MP_evtpool_57130_0 00:04:03.919 element at address: 0x200003fff340 with size: 48.003113 MiB 00:04:03.919 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57130_0 00:04:03.919 element at address: 0x2000071fdb40 with size: 36.008972 MiB 00:04:03.919 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57130_0 00:04:03.919 element at address: 0x20001c9be900 with size: 20.255615 MiB 00:04:03.919 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:03.919 element at address: 0x2000351feb00 with size: 18.005127 MiB 00:04:03.919 associated memzone info: size: 18.004944 MiB
name: MP_SCSI_TASK_Pool_0 00:04:03.919 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:04:03.919 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_57130 00:04:03.919 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:04:03.919 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57130 00:04:03.919 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:03.919 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57130 00:04:03.919 element at address: 0x20001c0fde00 with size: 1.008179 MiB 00:04:03.919 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:03.919 element at address: 0x20001c8bc780 with size: 1.008179 MiB 00:04:03.919 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:03.919 element at address: 0x20001bcfde00 with size: 1.008179 MiB 00:04:03.919 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:03.919 element at address: 0x200015ef25c0 with size: 1.008179 MiB 00:04:03.919 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:03.919 element at address: 0x200003eff100 with size: 1.000549 MiB 00:04:03.919 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57130 00:04:03.919 element at address: 0x200003affb80 with size: 1.000549 MiB 00:04:03.919 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57130 00:04:03.919 element at address: 0x20001c4ffd40 with size: 1.000549 MiB 00:04:03.919 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57130 00:04:03.919 element at address: 0x2000350fe8c0 with size: 1.000549 MiB 00:04:03.919 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57130 00:04:03.919 element at address: 0x200003a7f4c0 with size: 0.500549 MiB 00:04:03.919 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57130 00:04:03.919 element at address: 0x200003e7edc0 with size: 0.500549 MiB 00:04:03.919 associated memzone info: size: 0.500366 MiB 
name: RG_MP_bdev_io_57130 00:04:03.919 element at address: 0x20001c07dac0 with size: 0.500549 MiB 00:04:03.919 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:03.919 element at address: 0x200015e72280 with size: 0.500549 MiB 00:04:03.919 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:03.919 element at address: 0x20001c87c440 with size: 0.250549 MiB 00:04:03.919 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:03.919 element at address: 0x200003a5e780 with size: 0.125549 MiB 00:04:03.919 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57130 00:04:03.919 element at address: 0x20001bcf5ac0 with size: 0.031799 MiB 00:04:03.919 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:03.919 element at address: 0x20002b264640 with size: 0.023804 MiB 00:04:03.919 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:03.919 element at address: 0x200003a5a540 with size: 0.016174 MiB 00:04:03.919 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57130 00:04:03.919 element at address: 0x20002b26a7c0 with size: 0.002502 MiB 00:04:03.919 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:03.919 element at address: 0x2000002d6180 with size: 0.000366 MiB 00:04:03.919 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57130 00:04:03.919 element at address: 0x200003aff800 with size: 0.000366 MiB 00:04:03.919 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57130 00:04:03.919 element at address: 0x200015dffd80 with size: 0.000366 MiB 00:04:03.919 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57130 00:04:03.919 element at address: 0x20002b26b300 with size: 0.000366 MiB 00:04:03.919 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:03.919 14:28:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 
00:04:03.919 14:28:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57130 00:04:03.919 14:28:55 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 57130 ']' 00:04:03.919 14:28:55 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 57130 00:04:03.919 14:28:55 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:04:03.919 14:28:55 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:03.919 14:28:55 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57130 00:04:03.919 killing process with pid 57130 00:04:03.919 14:28:55 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:03.919 14:28:55 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:03.919 14:28:55 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57130' 00:04:03.919 14:28:55 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 57130 00:04:03.919 14:28:55 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 57130 00:04:05.835 ************************************ 00:04:05.835 END TEST dpdk_mem_utility 00:04:05.835 ************************************ 00:04:05.835 00:04:05.835 real 0m2.908s 00:04:05.835 user 0m2.922s 00:04:05.835 sys 0m0.401s 00:04:05.835 14:28:57 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:05.835 14:28:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:05.835 14:28:57 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:05.835 14:28:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:05.835 14:28:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:05.835 14:28:57 -- common/autotest_common.sh@10 -- # set +x 00:04:05.835 ************************************ 00:04:05.835 START TEST event 00:04:05.835 ************************************ 
00:04:05.835 14:28:57 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:05.835 * Looking for test storage... 00:04:05.835 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:05.835 14:28:57 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:05.835 14:28:57 event -- common/autotest_common.sh@1681 -- # lcov --version 00:04:05.835 14:28:57 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:05.835 14:28:57 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:05.835 14:28:57 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:05.835 14:28:57 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:05.835 14:28:57 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:05.835 14:28:57 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:05.835 14:28:57 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:05.835 14:28:57 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:05.835 14:28:57 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:05.835 14:28:57 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:05.835 14:28:57 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:05.835 14:28:57 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:05.835 14:28:57 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:05.835 14:28:57 event -- scripts/common.sh@344 -- # case "$op" in 00:04:05.835 14:28:57 event -- scripts/common.sh@345 -- # : 1 00:04:05.835 14:28:57 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:05.835 14:28:57 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:05.835 14:28:57 event -- scripts/common.sh@365 -- # decimal 1 00:04:05.835 14:28:57 event -- scripts/common.sh@353 -- # local d=1 00:04:05.835 14:28:57 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:05.835 14:28:57 event -- scripts/common.sh@355 -- # echo 1 00:04:05.836 14:28:57 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:05.836 14:28:57 event -- scripts/common.sh@366 -- # decimal 2 00:04:05.836 14:28:57 event -- scripts/common.sh@353 -- # local d=2 00:04:05.836 14:28:57 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:05.836 14:28:57 event -- scripts/common.sh@355 -- # echo 2 00:04:05.836 14:28:57 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:05.836 14:28:57 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:05.836 14:28:57 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:05.836 14:28:57 event -- scripts/common.sh@368 -- # return 0 00:04:05.836 14:28:57 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:05.836 14:28:57 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:05.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.836 --rc genhtml_branch_coverage=1 00:04:05.836 --rc genhtml_function_coverage=1 00:04:05.836 --rc genhtml_legend=1 00:04:05.836 --rc geninfo_all_blocks=1 00:04:05.836 --rc geninfo_unexecuted_blocks=1 00:04:05.836 00:04:05.836 ' 00:04:05.836 14:28:57 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:05.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.836 --rc genhtml_branch_coverage=1 00:04:05.836 --rc genhtml_function_coverage=1 00:04:05.836 --rc genhtml_legend=1 00:04:05.836 --rc geninfo_all_blocks=1 00:04:05.836 --rc geninfo_unexecuted_blocks=1 00:04:05.836 00:04:05.836 ' 00:04:05.836 14:28:57 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:05.836 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:05.836 --rc genhtml_branch_coverage=1 00:04:05.836 --rc genhtml_function_coverage=1 00:04:05.836 --rc genhtml_legend=1 00:04:05.836 --rc geninfo_all_blocks=1 00:04:05.836 --rc geninfo_unexecuted_blocks=1 00:04:05.836 00:04:05.836 ' 00:04:05.836 14:28:57 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:05.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.836 --rc genhtml_branch_coverage=1 00:04:05.836 --rc genhtml_function_coverage=1 00:04:05.836 --rc genhtml_legend=1 00:04:05.836 --rc geninfo_all_blocks=1 00:04:05.836 --rc geninfo_unexecuted_blocks=1 00:04:05.836 00:04:05.836 ' 00:04:05.836 14:28:57 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:05.836 14:28:57 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:05.836 14:28:57 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:05.836 14:28:57 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:04:05.836 14:28:57 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:05.836 14:28:57 event -- common/autotest_common.sh@10 -- # set +x 00:04:05.836 ************************************ 00:04:05.836 START TEST event_perf 00:04:05.836 ************************************ 00:04:05.836 14:28:57 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:05.836 Running I/O for 1 seconds...[2024-10-01 14:28:57.441486] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:04:05.836 [2024-10-01 14:28:57.441593] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57227 ] 00:04:06.095 [2024-10-01 14:28:57.593956] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:06.356 [2024-10-01 14:28:57.785810] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:06.356 [2024-10-01 14:28:57.786253] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:04:06.356 Running I/O for 1 seconds...[2024-10-01 14:28:57.786696] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.356 [2024-10-01 14:28:57.787027] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:04:07.765 00:04:07.765 lcore 0: 192107 00:04:07.765 lcore 1: 192108 00:04:07.765 lcore 2: 192110 00:04:07.765 lcore 3: 192107 00:04:07.765 done. 
00:04:07.765 00:04:07.765 real 0m1.648s 00:04:07.765 user 0m4.428s 00:04:07.765 sys 0m0.095s 00:04:07.765 14:28:59 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:07.765 ************************************ 00:04:07.765 END TEST event_perf 00:04:07.765 ************************************ 00:04:07.765 14:28:59 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:07.765 14:28:59 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:07.765 14:28:59 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:07.765 14:28:59 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:07.765 14:28:59 event -- common/autotest_common.sh@10 -- # set +x 00:04:07.765 ************************************ 00:04:07.765 START TEST event_reactor 00:04:07.765 ************************************ 00:04:07.765 14:28:59 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:07.765 [2024-10-01 14:28:59.159906] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:04:07.765 [2024-10-01 14:28:59.160007] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57261 ] 00:04:07.765 [2024-10-01 14:28:59.304385] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.026 [2024-10-01 14:28:59.491208] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.408 test_start 00:04:09.408 oneshot 00:04:09.408 tick 100 00:04:09.408 tick 100 00:04:09.408 tick 250 00:04:09.408 tick 100 00:04:09.408 tick 100 00:04:09.408 tick 100 00:04:09.408 tick 250 00:04:09.408 tick 500 00:04:09.408 tick 100 00:04:09.408 tick 100 00:04:09.408 tick 250 00:04:09.408 tick 100 00:04:09.408 tick 100 00:04:09.408 test_end 00:04:09.408 00:04:09.408 real 0m1.628s 00:04:09.408 user 0m1.443s 00:04:09.408 sys 0m0.076s 00:04:09.408 14:29:00 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:09.408 14:29:00 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:09.408 ************************************ 00:04:09.408 END TEST event_reactor 00:04:09.408 ************************************ 00:04:09.408 14:29:00 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:09.408 14:29:00 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:09.408 14:29:00 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:09.408 14:29:00 event -- common/autotest_common.sh@10 -- # set +x 00:04:09.408 ************************************ 00:04:09.408 START TEST event_reactor_perf 00:04:09.408 ************************************ 00:04:09.408 14:29:00 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:09.408 [2024-10-01 
14:29:00.863872] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:04:09.408 [2024-10-01 14:29:00.863986] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57303 ] 00:04:09.408 [2024-10-01 14:29:01.012596] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.669 [2024-10-01 14:29:01.211617] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.054 test_start 00:04:11.054 test_end 00:04:11.054 Performance: 313721 events per second 00:04:11.054 00:04:11.054 real 0m1.647s 00:04:11.054 user 0m1.459s 00:04:11.054 sys 0m0.078s 00:04:11.054 14:29:02 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:11.054 14:29:02 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:11.054 ************************************ 00:04:11.054 END TEST event_reactor_perf 00:04:11.054 ************************************ 00:04:11.054 14:29:02 event -- event/event.sh@49 -- # uname -s 00:04:11.054 14:29:02 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:11.054 14:29:02 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:11.054 14:29:02 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:11.054 14:29:02 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:11.054 14:29:02 event -- common/autotest_common.sh@10 -- # set +x 00:04:11.054 ************************************ 00:04:11.054 START TEST event_scheduler 00:04:11.054 ************************************ 00:04:11.054 14:29:02 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:11.054 * Looking for test storage... 
00:04:11.054 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:11.054 14:29:02 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:11.054 14:29:02 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:04:11.054 14:29:02 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:11.054 14:29:02 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:11.054 14:29:02 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:11.054 14:29:02 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:11.054 14:29:02 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:11.054 14:29:02 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:11.054 14:29:02 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:11.054 14:29:02 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:11.054 14:29:02 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:11.054 14:29:02 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:11.054 14:29:02 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:11.054 14:29:02 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:11.054 14:29:02 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:11.054 14:29:02 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:11.054 14:29:02 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:11.054 14:29:02 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:11.054 14:29:02 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:11.054 14:29:02 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:11.054 14:29:02 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:11.054 14:29:02 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:11.054 14:29:02 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:11.054 14:29:02 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:11.054 14:29:02 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:11.054 14:29:02 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:11.054 14:29:02 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:11.054 14:29:02 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:11.054 14:29:02 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:11.054 14:29:02 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:11.054 14:29:02 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:11.054 14:29:02 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:11.055 14:29:02 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:11.055 14:29:02 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:11.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.055 --rc genhtml_branch_coverage=1 00:04:11.055 --rc genhtml_function_coverage=1 00:04:11.055 --rc genhtml_legend=1 00:04:11.055 --rc geninfo_all_blocks=1 00:04:11.055 --rc geninfo_unexecuted_blocks=1 00:04:11.055 00:04:11.055 ' 00:04:11.055 14:29:02 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:11.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.055 --rc genhtml_branch_coverage=1 00:04:11.055 --rc genhtml_function_coverage=1 00:04:11.055 --rc 
genhtml_legend=1 00:04:11.055 --rc geninfo_all_blocks=1 00:04:11.055 --rc geninfo_unexecuted_blocks=1 00:04:11.055 00:04:11.055 ' 00:04:11.055 14:29:02 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:11.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.055 --rc genhtml_branch_coverage=1 00:04:11.055 --rc genhtml_function_coverage=1 00:04:11.055 --rc genhtml_legend=1 00:04:11.055 --rc geninfo_all_blocks=1 00:04:11.055 --rc geninfo_unexecuted_blocks=1 00:04:11.055 00:04:11.055 ' 00:04:11.055 14:29:02 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:11.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.055 --rc genhtml_branch_coverage=1 00:04:11.055 --rc genhtml_function_coverage=1 00:04:11.055 --rc genhtml_legend=1 00:04:11.055 --rc geninfo_all_blocks=1 00:04:11.055 --rc geninfo_unexecuted_blocks=1 00:04:11.055 00:04:11.055 ' 00:04:11.055 14:29:02 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:11.055 14:29:02 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=57379 00:04:11.055 14:29:02 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:11.055 14:29:02 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 57379 00:04:11.055 14:29:02 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 57379 ']' 00:04:11.055 14:29:02 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:11.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:11.055 14:29:02 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:11.055 14:29:02 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:11.055 14:29:02 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:11.055 14:29:02 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:11.055 14:29:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:11.316 [2024-10-01 14:29:02.753932] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:04:11.316 [2024-10-01 14:29:02.754062] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57379 ] 00:04:11.316 [2024-10-01 14:29:02.902071] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:11.576 [2024-10-01 14:29:03.095059] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.576 [2024-10-01 14:29:03.095364] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:11.576 [2024-10-01 14:29:03.096088] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:04:11.576 [2024-10-01 14:29:03.096311] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:04:12.171 14:29:03 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:12.171 14:29:03 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:04:12.171 14:29:03 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:12.171 14:29:03 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.171 14:29:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:12.171 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:12.171 POWER: Cannot set governor of lcore 0 to userspace 00:04:12.171 POWER: failed 
to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:12.171 POWER: Cannot set governor of lcore 0 to performance 00:04:12.171 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:12.171 POWER: Cannot set governor of lcore 0 to userspace 00:04:12.171 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:12.171 POWER: Cannot set governor of lcore 0 to userspace 00:04:12.171 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:12.171 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:12.171 POWER: Unable to set Power Management Environment for lcore 0 00:04:12.171 [2024-10-01 14:29:03.622149] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:04:12.171 [2024-10-01 14:29:03.622168] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:04:12.171 [2024-10-01 14:29:03.622178] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:12.171 [2024-10-01 14:29:03.622198] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:12.171 [2024-10-01 14:29:03.622206] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:12.171 [2024-10-01 14:29:03.622215] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:12.171 14:29:03 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:12.171 14:29:03 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:12.171 14:29:03 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.171 14:29:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:12.171 [2024-10-01 14:29:03.843111] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:04:12.171 14:29:03 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:12.171 14:29:03 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:12.171 14:29:03 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:12.171 14:29:03 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:12.171 14:29:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:12.432 ************************************ 00:04:12.432 START TEST scheduler_create_thread 00:04:12.432 ************************************ 00:04:12.432 14:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:04:12.432 14:29:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:12.432 14:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.432 14:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:12.432 2 00:04:12.432 14:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:12.432 14:29:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:12.432 14:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.432 14:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:12.432 3 00:04:12.432 14:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:12.432 14:29:03 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:12.432 14:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.432 14:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:12.432 4 00:04:12.432 14:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:12.432 14:29:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:12.432 14:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.432 14:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:12.432 5 00:04:12.432 14:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:12.432 14:29:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:12.432 14:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.432 14:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:12.432 6 00:04:12.432 14:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:12.432 14:29:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:12.432 14:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.432 14:29:03 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:04:12.432 7 00:04:12.432 14:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:12.432 14:29:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:12.432 14:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.432 14:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:12.432 8 00:04:12.432 14:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:12.432 14:29:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:12.432 14:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.432 14:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:12.432 9 00:04:12.432 14:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:12.432 14:29:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:12.432 14:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.432 14:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:12.432 10 00:04:12.432 14:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:12.432 14:29:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:04:12.432 14:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.432 14:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:12.432 14:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:12.432 14:29:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:12.432 14:29:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:12.432 14:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.432 14:29:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:13.371 14:29:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:13.371 14:29:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:13.371 14:29:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:13.371 14:29:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:14.752 14:29:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.752 14:29:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:14.752 14:29:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:14.752 14:29:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.752 14:29:06 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.695 14:29:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.695 00:04:15.695 real 0m3.375s 00:04:15.695 user 0m0.015s 00:04:15.695 sys 0m0.008s 00:04:15.695 14:29:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:15.695 ************************************ 00:04:15.695 END TEST scheduler_create_thread 00:04:15.695 ************************************ 00:04:15.695 14:29:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.695 14:29:07 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:15.695 14:29:07 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 57379 00:04:15.695 14:29:07 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 57379 ']' 00:04:15.695 14:29:07 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 57379 00:04:15.695 14:29:07 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:04:15.695 14:29:07 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:15.695 14:29:07 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57379 00:04:15.695 killing process with pid 57379 00:04:15.695 14:29:07 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:04:15.695 14:29:07 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:04:15.695 14:29:07 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57379' 00:04:15.695 14:29:07 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 57379 00:04:15.695 14:29:07 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 57379 00:04:15.955 [2024-10-01 14:29:07.614960] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:16.898 00:04:16.898 real 0m5.959s 00:04:16.898 user 0m11.661s 00:04:16.898 sys 0m0.348s 00:04:16.898 ************************************ 00:04:16.898 END TEST event_scheduler 00:04:16.898 ************************************ 00:04:16.898 14:29:08 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:16.898 14:29:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:16.898 14:29:08 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:16.898 14:29:08 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:16.898 14:29:08 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:16.898 14:29:08 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:16.898 14:29:08 event -- common/autotest_common.sh@10 -- # set +x 00:04:16.898 ************************************ 00:04:16.898 START TEST app_repeat 00:04:16.898 ************************************ 00:04:16.898 14:29:08 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:04:16.898 14:29:08 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:16.898 14:29:08 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:16.898 14:29:08 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:16.898 14:29:08 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:16.898 14:29:08 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:16.898 14:29:08 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:16.898 14:29:08 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:16.898 14:29:08 event.app_repeat -- event/event.sh@19 -- # repeat_pid=57490 00:04:16.898 14:29:08 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:16.898 14:29:08 event.app_repeat -- 
event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:17.160 Process app_repeat pid: 57490 00:04:17.160 spdk_app_start Round 0 00:04:17.160 14:29:08 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 57490' 00:04:17.160 14:29:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:17.160 14:29:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:17.160 14:29:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57490 /var/tmp/spdk-nbd.sock 00:04:17.160 14:29:08 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 57490 ']' 00:04:17.160 14:29:08 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:17.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:17.160 14:29:08 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:17.160 14:29:08 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:17.160 14:29:08 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:17.160 14:29:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:17.160 [2024-10-01 14:29:08.616253] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:04:17.160 [2024-10-01 14:29:08.616364] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57490 ] 00:04:17.160 [2024-10-01 14:29:08.765956] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:17.420 [2024-10-01 14:29:08.962216] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:17.420 [2024-10-01 14:29:08.962371] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.993 14:29:09 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:17.993 14:29:09 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:17.993 14:29:09 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:18.253 Malloc0 00:04:18.253 14:29:09 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:18.514 Malloc1 00:04:18.514 14:29:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:18.514 14:29:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:18.514 14:29:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:18.514 14:29:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:18.514 14:29:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:18.514 14:29:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:18.514 14:29:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:18.514 14:29:10 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:18.514 14:29:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:18.514 14:29:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:18.514 14:29:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:18.514 14:29:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:18.514 14:29:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:18.514 14:29:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:18.514 14:29:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:18.514 14:29:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:18.775 /dev/nbd0 00:04:18.775 14:29:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:18.775 14:29:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:18.775 14:29:10 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:18.775 14:29:10 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:18.775 14:29:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:18.775 14:29:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:18.775 14:29:10 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:18.775 14:29:10 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:18.775 14:29:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:18.775 14:29:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:18.775 14:29:10 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:18.775 1+0 records in 00:04:18.775 1+0 
records out 00:04:18.775 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000547608 s, 7.5 MB/s 00:04:18.775 14:29:10 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:18.775 14:29:10 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:18.775 14:29:10 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:18.775 14:29:10 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:18.775 14:29:10 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:18.775 14:29:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:18.775 14:29:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:18.775 14:29:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:18.775 /dev/nbd1 00:04:19.085 14:29:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:19.085 14:29:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:19.085 14:29:10 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:19.085 14:29:10 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:19.085 14:29:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:19.085 14:29:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:19.085 14:29:10 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:19.085 14:29:10 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:19.085 14:29:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:19.085 14:29:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:19.085 14:29:10 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:19.085 1+0 records in 00:04:19.085 1+0 records out 00:04:19.086 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000572629 s, 7.2 MB/s 00:04:19.086 14:29:10 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:19.086 14:29:10 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:19.086 14:29:10 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:19.086 14:29:10 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:19.086 14:29:10 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:19.086 14:29:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:19.086 14:29:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:19.086 14:29:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:19.086 14:29:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.086 14:29:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:19.086 14:29:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:19.086 { 00:04:19.086 "nbd_device": "/dev/nbd0", 00:04:19.086 "bdev_name": "Malloc0" 00:04:19.086 }, 00:04:19.086 { 00:04:19.086 "nbd_device": "/dev/nbd1", 00:04:19.086 "bdev_name": "Malloc1" 00:04:19.086 } 00:04:19.086 ]' 00:04:19.086 14:29:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:19.086 14:29:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:19.086 { 00:04:19.086 "nbd_device": "/dev/nbd0", 00:04:19.086 "bdev_name": "Malloc0" 00:04:19.086 }, 00:04:19.086 { 00:04:19.086 "nbd_device": "/dev/nbd1", 00:04:19.086 "bdev_name": "Malloc1" 00:04:19.086 } 00:04:19.086 ]' 
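The `waitfornbd` calls traced above poll `/proc/partitions` up to 20 times until the freshly started nbd device shows up, then `break` out of the loop. A minimal sketch of that retry pattern, generalized to any grep-able condition (the function name `wait_for_line` and the 0.1 s pause are illustrative choices, not taken from the script):

```shell
# Sketch of the waitfornbd-style retry loop: poll a file for a word,
# give up after 20 attempts. Name and sleep interval are hypothetical.
wait_for_line() {
    local pattern=$1 file=$2 i
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$pattern" "$file"; then
            return 0    # condition met, stop polling
        fi
        sleep 0.1       # brief pause before the next probe
    done
    return 1            # timed out without seeing the pattern
}
```

In the real helper the polled file is `/proc/partitions` and the pattern is the bare device name (`nbd0`, `nbd1`), so the loop completes as soon as the kernel registers the block device.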
00:04:19.086 14:29:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:19.086 /dev/nbd1' 00:04:19.086 14:29:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:19.086 /dev/nbd1' 00:04:19.086 14:29:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:19.086 14:29:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:19.086 14:29:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:19.086 14:29:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:19.086 14:29:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:19.086 14:29:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:19.086 14:29:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.086 14:29:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:19.086 14:29:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:19.086 14:29:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:19.086 14:29:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:19.086 14:29:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:19.086 256+0 records in 00:04:19.086 256+0 records out 00:04:19.086 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00735419 s, 143 MB/s 00:04:19.086 14:29:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:19.086 14:29:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:19.347 256+0 records in 00:04:19.347 256+0 records out 00:04:19.347 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0391282 s, 26.8 MB/s 00:04:19.347 14:29:10 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:19.347 14:29:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:19.347 256+0 records in 00:04:19.347 256+0 records out 00:04:19.347 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0213305 s, 49.2 MB/s 00:04:19.347 14:29:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:19.347 14:29:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.347 14:29:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:19.347 14:29:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:19.347 14:29:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:19.347 14:29:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:19.347 14:29:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:19.347 14:29:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:19.347 14:29:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:19.347 14:29:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:19.347 14:29:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:19.347 14:29:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:19.347 14:29:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:19.347 14:29:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.347 14:29:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.348 14:29:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:19.348 14:29:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:19.348 14:29:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:19.348 14:29:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:19.609 14:29:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:19.609 14:29:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:19.609 14:29:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:19.609 14:29:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:19.609 14:29:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:19.609 14:29:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:19.609 14:29:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:19.609 14:29:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:19.609 14:29:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:19.609 14:29:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:19.609 14:29:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:19.609 14:29:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:19.609 14:29:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:19.609 14:29:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:19.609 14:29:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:19.609 14:29:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:19.609 14:29:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:04:19.609 14:29:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:19.609 14:29:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:19.609 14:29:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.609 14:29:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:19.869 14:29:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:19.869 14:29:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:19.869 14:29:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:19.870 14:29:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:19.870 14:29:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:19.870 14:29:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:19.870 14:29:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:19.870 14:29:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:19.870 14:29:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:19.870 14:29:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:19.870 14:29:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:19.870 14:29:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:19.870 14:29:11 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:20.442 14:29:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:21.384 [2024-10-01 14:29:12.753250] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:21.384 [2024-10-01 14:29:12.976270] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:21.384 [2024-10-01 14:29:12.976414] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.645 
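The `nbd_dd_data_verify ... write` / `... verify` pair traced above fills a temp file with 1 MiB of random data, `dd`s it onto every nbd device in the list, and later `cmp`s each device back against the source file. A self-contained sketch of that write-then-verify flow, with plain temp files standing in for `/dev/nbd0`/`/dev/nbd1` (the `direct` I/O flags from the trace are dropped here because plain files on tmpfs may reject `O_DIRECT`):

```shell
# Write-then-verify sketch; $dev0/$dev1 are plain-file stand-ins for
# the nbd devices, so no nbd server is needed to run this.
tmp_file=$(mktemp)
dev0=$(mktemp)
dev1=$(mktemp)

# write phase: 256 x 4 KiB blocks of random data, copied to each target
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
for dev in "$dev0" "$dev1"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 2>/dev/null
done

# verify phase: byte-compare the first 1M of each target with the source
for dev in "$dev0" "$dev1"; do
    cmp -b -n 1M "$tmp_file" "$dev"
done
```

`cmp` exits nonzero on the first differing byte, which is what makes the verify phase a hard failure in the test rather than a soft warning.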
[2024-10-01 14:29:13.120541] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:21.645 [2024-10-01 14:29:13.120625] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:23.563 14:29:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:23.563 14:29:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:23.563 spdk_app_start Round 1 00:04:23.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:23.563 14:29:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57490 /var/tmp/spdk-nbd.sock 00:04:23.563 14:29:14 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 57490 ']' 00:04:23.563 14:29:14 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:23.563 14:29:14 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:23.563 14:29:14 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:04:23.563 14:29:14 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:23.563 14:29:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:23.563 14:29:15 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:23.563 14:29:15 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:23.563 14:29:15 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:23.931 Malloc0 00:04:23.931 14:29:15 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:23.931 Malloc1 00:04:23.931 14:29:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:23.931 14:29:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.931 14:29:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:23.931 14:29:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:23.931 14:29:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.931 14:29:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:23.931 14:29:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:23.931 14:29:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.931 14:29:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:23.931 14:29:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:23.931 14:29:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.931 14:29:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:23.931 14:29:15 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:23.931 14:29:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:23.931 14:29:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:23.931 14:29:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:24.193 /dev/nbd0 00:04:24.194 14:29:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:24.194 14:29:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:24.194 14:29:15 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:24.194 14:29:15 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:24.194 14:29:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:24.194 14:29:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:24.194 14:29:15 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:24.194 14:29:15 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:24.194 14:29:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:24.194 14:29:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:24.194 14:29:15 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:24.194 1+0 records in 00:04:24.194 1+0 records out 00:04:24.194 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333728 s, 12.3 MB/s 00:04:24.194 14:29:15 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:24.194 14:29:15 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:24.194 14:29:15 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:24.194 
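After a device appears in `/proc/partitions`, the trace shows `waitfornbd` confirming it is actually readable: `dd` one 4 KiB block out of it, `stat` the result, and require a nonzero size before deleting the probe file. A sketch of that read-probe, again with a plain file in place of `/dev/nbdX` and without the trace's `iflag=direct` (which plain files on tmpfs may not accept):

```shell
# One-block read probe: read 4 KiB, confirm bytes actually arrived.
# $dev is a hypothetical plain-file stand-in for the nbd device node.
dev=$(mktemp)
dd if=/dev/urandom of="$dev" bs=4096 count=1 2>/dev/null

probe_out=$(mktemp)
dd if="$dev" of="$probe_out" bs=4096 count=1 2>/dev/null
size=$(stat -c %s "$probe_out")
rm -f "$probe_out"           # the probe file is throwaway, as in the trace
[ "$size" != 0 ]             # non-empty read => device is usable
```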
14:29:15 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:24.194 14:29:15 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:24.194 14:29:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:24.194 14:29:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:24.194 14:29:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:24.455 /dev/nbd1 00:04:24.455 14:29:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:24.455 14:29:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:24.455 14:29:16 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:24.455 14:29:16 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:24.455 14:29:16 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:24.455 14:29:16 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:24.455 14:29:16 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:24.455 14:29:16 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:24.455 14:29:16 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:24.455 14:29:16 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:24.455 14:29:16 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:24.455 1+0 records in 00:04:24.455 1+0 records out 00:04:24.455 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237844 s, 17.2 MB/s 00:04:24.455 14:29:16 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:24.455 14:29:16 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:24.455 14:29:16 event.app_repeat 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:24.455 14:29:16 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:24.455 14:29:16 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:24.455 14:29:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:24.455 14:29:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:24.455 14:29:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:24.455 14:29:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.455 14:29:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:24.716 14:29:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:24.716 { 00:04:24.716 "nbd_device": "/dev/nbd0", 00:04:24.716 "bdev_name": "Malloc0" 00:04:24.716 }, 00:04:24.716 { 00:04:24.716 "nbd_device": "/dev/nbd1", 00:04:24.716 "bdev_name": "Malloc1" 00:04:24.716 } 00:04:24.716 ]' 00:04:24.716 14:29:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:24.716 14:29:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:24.716 { 00:04:24.716 "nbd_device": "/dev/nbd0", 00:04:24.716 "bdev_name": "Malloc0" 00:04:24.716 }, 00:04:24.716 { 00:04:24.716 "nbd_device": "/dev/nbd1", 00:04:24.716 "bdev_name": "Malloc1" 00:04:24.716 } 00:04:24.716 ]' 00:04:24.716 14:29:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:24.716 /dev/nbd1' 00:04:24.716 14:29:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:24.716 /dev/nbd1' 00:04:24.716 14:29:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:24.716 14:29:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:24.716 14:29:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:24.716 
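The `nbd_get_count` steps traced above parse the `nbd_get_disks` RPC output with `jq`, then count matching device paths with `grep -c`. A sketch of that JSON-to-count pipeline, assuming `jq` is installed and using a literal JSON string in place of the live RPC response:

```shell
# Count nbd devices from nbd_get_disks-style JSON (literal stands in
# for the RPC response; requires jq).
nbd_disks_json='[
  {"nbd_device": "/dev/nbd0", "bdev_name": "Malloc0"},
  {"nbd_device": "/dev/nbd1", "bdev_name": "Malloc1"}
]'
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
# grep -c exits 1 when it counts zero matches, so guard it the way the
# trace does (note the bare `true` after grep in the empty-list case)
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
```

On teardown the same pipeline runs against an empty list `[]`, yielding `count=0`, which is what the `'[' 0 -ne 0 ']'` check later in the trace asserts.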
14:29:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:24.716 14:29:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:24.716 14:29:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:24.716 14:29:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.716 14:29:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:24.716 14:29:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:24.716 14:29:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:24.716 14:29:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:24.716 14:29:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:24.716 256+0 records in 00:04:24.716 256+0 records out 00:04:24.716 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124327 s, 84.3 MB/s 00:04:24.716 14:29:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:24.716 14:29:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:24.716 256+0 records in 00:04:24.716 256+0 records out 00:04:24.716 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0211756 s, 49.5 MB/s 00:04:24.716 14:29:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:24.716 14:29:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:24.716 256+0 records in 00:04:24.716 256+0 records out 00:04:24.716 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019105 s, 54.9 MB/s 00:04:24.716 14:29:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:04:24.716 14:29:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.716 14:29:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:24.716 14:29:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:24.716 14:29:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:24.716 14:29:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:24.716 14:29:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:24.716 14:29:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:24.716 14:29:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:24.716 14:29:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:24.716 14:29:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:24.716 14:29:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:24.716 14:29:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:24.716 14:29:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.716 14:29:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.716 14:29:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:24.716 14:29:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:24.716 14:29:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:24.716 14:29:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:24.977 14:29:16 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:24.977 14:29:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:24.977 14:29:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:24.977 14:29:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:24.977 14:29:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:24.977 14:29:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:24.977 14:29:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:24.977 14:29:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:24.977 14:29:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:24.977 14:29:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:25.240 14:29:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:25.240 14:29:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:25.240 14:29:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:25.240 14:29:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:25.240 14:29:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:25.240 14:29:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:25.240 14:29:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:25.240 14:29:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:25.240 14:29:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:25.240 14:29:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.240 14:29:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:25.502 14:29:16 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:25.502 14:29:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:25.502 14:29:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:25.502 14:29:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:25.502 14:29:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:25.502 14:29:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:25.502 14:29:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:25.502 14:29:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:25.502 14:29:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:25.502 14:29:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:25.502 14:29:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:25.502 14:29:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:25.502 14:29:17 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:25.763 14:29:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:26.705 [2024-10-01 14:29:18.129522] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:26.705 [2024-10-01 14:29:18.305080] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:26.705 [2024-10-01 14:29:18.305229] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.963 [2024-10-01 14:29:18.428619] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:26.963 [2024-10-01 14:29:18.428871] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:28.878 spdk_app_start Round 2 00:04:28.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:28.878 14:29:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:28.878 14:29:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:28.878 14:29:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57490 /var/tmp/spdk-nbd.sock 00:04:28.878 14:29:20 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 57490 ']' 00:04:28.878 14:29:20 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:28.878 14:29:20 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:28.878 14:29:20 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:28.878 14:29:20 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:28.878 14:29:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:28.878 14:29:20 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:28.878 14:29:20 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:28.878 14:29:20 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:29.141 Malloc0 00:04:29.141 14:29:20 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:29.403 Malloc1 00:04:29.403 14:29:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:29.403 14:29:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.403 14:29:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:29.403 14:29:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:29.403 14:29:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:29.403 14:29:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:29.403 14:29:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:29.403 14:29:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.403 14:29:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:29.403 14:29:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:29.403 14:29:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:29.403 14:29:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:29.403 14:29:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:29.403 14:29:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:29.403 14:29:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:29.403 14:29:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:29.664 /dev/nbd0 00:04:29.664 14:29:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:29.664 14:29:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:29.665 14:29:21 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:29.665 14:29:21 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:29.665 14:29:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:29.665 14:29:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:29.665 14:29:21 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:29.665 14:29:21 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:29.665 14:29:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 
00:04:29.665 14:29:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:29.665 14:29:21 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:29.665 1+0 records in 00:04:29.665 1+0 records out 00:04:29.665 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355491 s, 11.5 MB/s 00:04:29.665 14:29:21 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:29.665 14:29:21 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:29.665 14:29:21 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:29.665 14:29:21 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:29.665 14:29:21 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:29.665 14:29:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:29.665 14:29:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:29.665 14:29:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:29.926 /dev/nbd1 00:04:29.926 14:29:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:29.926 14:29:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:29.926 14:29:21 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:29.926 14:29:21 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:29.926 14:29:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:29.926 14:29:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:29.926 14:29:21 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:29.926 14:29:21 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:04:29.926 14:29:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:29.926 14:29:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:29.926 14:29:21 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:29.926 1+0 records in 00:04:29.926 1+0 records out 00:04:29.926 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230352 s, 17.8 MB/s 00:04:29.926 14:29:21 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:29.926 14:29:21 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:29.926 14:29:21 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:29.926 14:29:21 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:29.926 14:29:21 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:29.926 14:29:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:29.926 14:29:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:29.926 14:29:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:29.926 14:29:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.926 14:29:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:30.186 14:29:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:30.186 { 00:04:30.186 "nbd_device": "/dev/nbd0", 00:04:30.186 "bdev_name": "Malloc0" 00:04:30.186 }, 00:04:30.186 { 00:04:30.186 "nbd_device": "/dev/nbd1", 00:04:30.186 "bdev_name": "Malloc1" 00:04:30.186 } 00:04:30.186 ]' 00:04:30.186 14:29:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:30.186 { 
00:04:30.186 "nbd_device": "/dev/nbd0", 00:04:30.186 "bdev_name": "Malloc0" 00:04:30.186 }, 00:04:30.186 { 00:04:30.186 "nbd_device": "/dev/nbd1", 00:04:30.186 "bdev_name": "Malloc1" 00:04:30.186 } 00:04:30.186 ]' 00:04:30.186 14:29:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:30.186 14:29:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:30.186 /dev/nbd1' 00:04:30.186 14:29:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:30.186 /dev/nbd1' 00:04:30.186 14:29:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:30.186 14:29:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:30.186 14:29:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:30.186 14:29:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:30.186 14:29:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:30.186 14:29:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:30.186 14:29:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.186 14:29:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:30.186 14:29:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:30.186 14:29:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:30.186 14:29:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:30.186 14:29:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:30.186 256+0 records in 00:04:30.186 256+0 records out 00:04:30.186 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115344 s, 90.9 MB/s 00:04:30.186 14:29:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:30.186 14:29:21 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:30.186 256+0 records in 00:04:30.186 256+0 records out 00:04:30.186 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0175157 s, 59.9 MB/s 00:04:30.186 14:29:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:30.186 14:29:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:30.186 256+0 records in 00:04:30.186 256+0 records out 00:04:30.186 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0219869 s, 47.7 MB/s 00:04:30.186 14:29:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:30.186 14:29:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.186 14:29:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:30.186 14:29:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:30.186 14:29:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:30.186 14:29:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:30.186 14:29:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:30.186 14:29:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:30.186 14:29:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:30.186 14:29:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:30.186 14:29:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:30.186 14:29:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
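The `nbd_dd_data_verify` steps traced above boil down to: fill a temp file from /dev/urandom, `dd` it onto each device, then `cmp` the device contents back against the file. A minimal sketch of that write/verify round-trip, using two ordinary temp files in place of `/dev/nbd*` so it runs anywhere (the real harness adds `oflag=direct` when writing to a block device):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stand-ins for the random-data file and the block device (real test uses /dev/nbd*).
tmp_file=$(mktemp)
fake_dev=$(mktemp)

# Write phase: 256 blocks of 4 KiB random data, then copy them to the "device".
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 status=none
dd if="$tmp_file" of="$fake_dev" bs=4096 count=256 status=none   # real run adds oflag=direct

# Verify phase: byte-compare the first 1 MiB, as the harness does with cmp -b -n 1M.
if cmp -s -n 1048576 "$tmp_file" "$fake_dev"; then
    verify=ok
else
    verify=mismatch
fi
echo "$verify"
rm -f "$tmp_file" "$fake_dev"
```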
00:04:30.187 14:29:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:30.187 14:29:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.187 14:29:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.187 14:29:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:30.187 14:29:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:30.187 14:29:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:30.187 14:29:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:30.447 14:29:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:30.447 14:29:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:30.447 14:29:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:30.447 14:29:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:30.447 14:29:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:30.447 14:29:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:30.447 14:29:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:30.447 14:29:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:30.447 14:29:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:30.447 14:29:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:30.708 14:29:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:30.708 14:29:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:30.708 14:29:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:30.708 14:29:22 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:30.708 14:29:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:30.708 14:29:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:30.708 14:29:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:30.708 14:29:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:30.708 14:29:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:30.708 14:29:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.708 14:29:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:31.018 14:29:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:31.018 14:29:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:31.018 14:29:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:31.018 14:29:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:31.018 14:29:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:31.018 14:29:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:31.018 14:29:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:31.018 14:29:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:31.018 14:29:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:31.018 14:29:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:31.018 14:29:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:31.018 14:29:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:31.018 14:29:22 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:31.313 14:29:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:32.256 
[2024-10-01 14:29:23.642113] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:32.256 [2024-10-01 14:29:23.824345] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:32.256 [2024-10-01 14:29:23.824591] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.516 [2024-10-01 14:29:23.945796] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:32.516 [2024-10-01 14:29:23.945875] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:34.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:34.432 14:29:25 event.app_repeat -- event/event.sh@38 -- # waitforlisten 57490 /var/tmp/spdk-nbd.sock 00:04:34.432 14:29:25 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 57490 ']' 00:04:34.432 14:29:25 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:34.432 14:29:25 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:34.432 14:29:25 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
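Earlier in the trace, `nbd_get_count` parses the JSON returned by the `nbd_get_disks` RPC with `jq -r '.[] | .nbd_device'` and counts `/dev/nbd*` entries with `grep -c`; the `true` step in the trace is what keeps an empty device list (count 0, where `grep -c` exits nonzero) from failing the pipeline. A sketch against canned JSON, assuming `jq` is installed (the device and bdev names are just the ones from the trace):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Canned output shaped like what nbd_get_disks returns over the RPC socket.
nbd_disks_json='[ { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" },
                  { "nbd_device": "/dev/nbd1", "bdev_name": "Malloc1" } ]'

# Extract device paths, then count them; grep -c exits 1 on zero matches,
# so tolerate that the same way the harness does with a trailing true.
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
echo "$count"   # 2 for the canned list above
```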
00:04:34.432 14:29:25 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:34.432 14:29:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:34.432 14:29:26 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:34.432 14:29:26 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:34.432 14:29:26 event.app_repeat -- event/event.sh@39 -- # killprocess 57490 00:04:34.432 14:29:26 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 57490 ']' 00:04:34.432 14:29:26 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 57490 00:04:34.432 14:29:26 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:04:34.432 14:29:26 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:34.432 14:29:26 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57490 00:04:34.694 killing process with pid 57490 00:04:34.694 14:29:26 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:34.694 14:29:26 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:34.694 14:29:26 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57490' 00:04:34.694 14:29:26 event.app_repeat -- common/autotest_common.sh@969 -- # kill 57490 00:04:34.694 14:29:26 event.app_repeat -- common/autotest_common.sh@974 -- # wait 57490 00:04:35.269 spdk_app_start is called in Round 0. 00:04:35.269 Shutdown signal received, stop current app iteration 00:04:35.269 Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 reinitialization... 00:04:35.269 spdk_app_start is called in Round 1. 00:04:35.269 Shutdown signal received, stop current app iteration 00:04:35.269 Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 reinitialization... 00:04:35.269 spdk_app_start is called in Round 2. 
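`waitforlisten` above retries (with `max_retries=100`) until the target process is up and a UNIX domain socket such as /var/tmp/spdk-nbd.sock exists. A generic polling sketch of that pattern, with the watched path and retry count as placeholders and a plain file standing in for the socket:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Poll until a path (standing in for the RPC socket) appears, or give up.
wait_for_path() {
    local path=$1 max_retries=${2:-100} i
    for ((i = 1; i <= max_retries; i++)); do
        if [ -e "$path" ]; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}

marker=$(mktemp -u)                 # path that does not exist yet
( sleep 0.3; touch "$marker" ) &    # "server" creating its socket a moment later
wait_for_path "$marker" 50 && echo "listening"
wait
rm -f "$marker"
```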
00:04:35.269 Shutdown signal received, stop current app iteration 00:04:35.269 Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 reinitialization... 00:04:35.269 spdk_app_start is called in Round 3. 00:04:35.269 Shutdown signal received, stop current app iteration 00:04:35.269 14:29:26 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:35.270 14:29:26 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:35.270 00:04:35.270 real 0m18.308s 00:04:35.270 user 0m38.995s 00:04:35.270 sys 0m2.250s 00:04:35.270 14:29:26 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:35.270 ************************************ 00:04:35.270 14:29:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:35.270 END TEST app_repeat 00:04:35.270 ************************************ 00:04:35.270 14:29:26 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:35.270 14:29:26 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:35.270 14:29:26 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.270 14:29:26 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.270 14:29:26 event -- common/autotest_common.sh@10 -- # set +x 00:04:35.531 ************************************ 00:04:35.531 START TEST cpu_locks 00:04:35.531 ************************************ 00:04:35.531 14:29:26 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:35.531 * Looking for test storage... 
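`killprocess`, seen above shutting down pid 57490, follows a fixed sequence: confirm the pid is alive with `kill -0`, read its process name via `ps --no-headers -o comm=` (refusing to signal a `sudo` wrapper), send SIGTERM, then `wait` for it to exit. A reduced sketch of the same sequence against a throwaway background process:

```shell
#!/usr/bin/env bash
set -euo pipefail

killprocess() {
    local pid=$1
    kill -0 "$pid"                                 # fails if the pid is already gone
    local name
    name=$(ps --no-headers -o comm= -p "$pid")     # reactor_0 in the real test
    [ "$name" != sudo ] || return 1                # never SIGTERM a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                # ignore the SIGTERM exit status
}

sleep 60 &
victim=$!
killprocess "$victim"
if ! kill -0 "$victim" 2>/dev/null; then
    echo "gone"
fi
```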
00:04:35.531 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:35.531 14:29:27 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:35.531 14:29:27 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:04:35.531 14:29:27 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:35.531 14:29:27 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:35.531 14:29:27 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.531 14:29:27 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.531 14:29:27 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.531 14:29:27 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.531 14:29:27 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.531 14:29:27 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.531 14:29:27 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.531 14:29:27 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.531 14:29:27 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:35.531 14:29:27 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.531 14:29:27 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:35.531 14:29:27 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:35.531 14:29:27 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:35.531 14:29:27 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.531 14:29:27 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:35.531 14:29:27 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:35.531 14:29:27 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:35.531 14:29:27 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.531 14:29:27 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:35.531 14:29:27 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.531 14:29:27 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:35.531 14:29:27 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:35.531 14:29:27 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.531 14:29:27 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:35.531 14:29:27 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.531 14:29:27 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.531 14:29:27 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.531 14:29:27 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:35.531 14:29:27 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.531 14:29:27 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:35.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.531 --rc genhtml_branch_coverage=1 00:04:35.531 --rc genhtml_function_coverage=1 00:04:35.531 --rc genhtml_legend=1 00:04:35.531 --rc geninfo_all_blocks=1 00:04:35.531 --rc geninfo_unexecuted_blocks=1 00:04:35.531 00:04:35.531 ' 00:04:35.531 14:29:27 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:35.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.531 --rc genhtml_branch_coverage=1 00:04:35.531 --rc genhtml_function_coverage=1 00:04:35.531 --rc genhtml_legend=1 00:04:35.531 --rc geninfo_all_blocks=1 00:04:35.531 --rc geninfo_unexecuted_blocks=1 
00:04:35.531 00:04:35.531 ' 00:04:35.531 14:29:27 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:35.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.531 --rc genhtml_branch_coverage=1 00:04:35.531 --rc genhtml_function_coverage=1 00:04:35.531 --rc genhtml_legend=1 00:04:35.531 --rc geninfo_all_blocks=1 00:04:35.531 --rc geninfo_unexecuted_blocks=1 00:04:35.531 00:04:35.531 ' 00:04:35.531 14:29:27 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:35.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.531 --rc genhtml_branch_coverage=1 00:04:35.531 --rc genhtml_function_coverage=1 00:04:35.531 --rc genhtml_legend=1 00:04:35.531 --rc geninfo_all_blocks=1 00:04:35.531 --rc geninfo_unexecuted_blocks=1 00:04:35.531 00:04:35.531 ' 00:04:35.532 14:29:27 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:35.532 14:29:27 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:35.532 14:29:27 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:35.532 14:29:27 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:35.532 14:29:27 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.532 14:29:27 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.532 14:29:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:35.532 ************************************ 00:04:35.532 START TEST default_locks 00:04:35.532 ************************************ 00:04:35.532 14:29:27 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:04:35.532 14:29:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=57927 00:04:35.532 14:29:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 57927 00:04:35.532 14:29:27 event.cpu_locks.default_locks -- 
common/autotest_common.sh@831 -- # '[' -z 57927 ']' 00:04:35.532 14:29:27 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.532 14:29:27 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:35.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.532 14:29:27 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.532 14:29:27 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:35.532 14:29:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:35.532 14:29:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:35.532 [2024-10-01 14:29:27.187210] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
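The `lt 1.15 2` check traced earlier walks scripts/common.sh's `cmp_versions`: split both version strings on `.`, `-`, or `:` with `IFS=.-: read -ra`, then compare component by component until one side wins. A self-contained sketch of the same less-than comparison, simplified to numeric components:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Return 0 when $1 < $2, comparing split components numerically,
# mirroring the IFS=.-: / read -ra split in scripts/common.sh.
version_lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < len; v++)); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1   # equal is not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```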
00:04:35.532 [2024-10-01 14:29:27.187338] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57927 ] 00:04:35.792 [2024-10-01 14:29:27.335597] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.053 [2024-10-01 14:29:27.529995] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.628 14:29:28 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:36.628 14:29:28 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:04:36.628 14:29:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 57927 00:04:36.628 14:29:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 57927 00:04:36.628 14:29:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:36.892 14:29:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 57927 00:04:36.892 14:29:28 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 57927 ']' 00:04:36.892 14:29:28 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 57927 00:04:36.892 14:29:28 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:04:36.892 14:29:28 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:36.892 14:29:28 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57927 00:04:36.892 14:29:28 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:36.892 14:29:28 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:36.892 14:29:28 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 57927' 00:04:36.892 killing process with pid 57927 00:04:36.892 14:29:28 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 57927 00:04:36.892 14:29:28 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 57927 00:04:38.300 14:29:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 57927 00:04:38.300 14:29:29 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:04:38.300 14:29:29 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57927 00:04:38.300 14:29:29 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:38.300 14:29:29 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:38.300 14:29:29 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:38.300 14:29:29 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:38.300 14:29:29 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 57927 00:04:38.300 14:29:29 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 57927 ']' 00:04:38.300 14:29:29 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.300 14:29:29 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:38.300 14:29:29 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
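`locks_exist` above verifies the target holds its CPU-mask lock by piping `lslocks -p <pid>` through `grep -q spdk_cpu_lock`. The underlying mechanism is an advisory file lock; the following sketch demonstrates the same idea with `flock` from util-linux instead of `lslocks`, a second non-blocking acquisition attempt standing in for the inspection (the lock file path is a placeholder, not SPDK's real one):

```shell
#!/usr/bin/env bash
set -euo pipefail

lockfile=$(mktemp)   # placeholder for the per-core CPU lock file

# Hold an exclusive lock in a background process, like a reactor holding its core lock.
flock "$lockfile" sleep 2 &
holder=$!
sleep 0.3            # give the holder a moment to acquire it

# A non-blocking attempt from another process now fails, proving the lock is held.
if ! flock -n "$lockfile" true; then
    echo "lock is held"
fi
kill "$holder" 2>/dev/null || true
wait 2>/dev/null || true
rm -f "$lockfile"
```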
00:04:38.300 14:29:29 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:38.300 ERROR: process (pid: 57927) is no longer running 00:04:38.300 14:29:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:38.300 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (57927) - No such process 00:04:38.300 14:29:29 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:38.300 14:29:29 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:04:38.300 14:29:29 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:04:38.300 14:29:29 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:38.300 14:29:29 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:38.300 14:29:29 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:38.300 14:29:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:38.300 14:29:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:38.300 14:29:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:38.300 14:29:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:38.300 00:04:38.300 real 0m2.865s 00:04:38.300 user 0m2.858s 00:04:38.300 sys 0m0.451s 00:04:38.300 14:29:29 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.300 ************************************ 00:04:38.300 END TEST default_locks 00:04:38.300 ************************************ 00:04:38.301 14:29:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:38.563 14:29:30 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:38.563 14:29:30 event.cpu_locks -- common/autotest_common.sh@1101 -- # 
'[' 2 -le 1 ']'
00:04:38.563 14:29:30 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:38.563 14:29:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:38.563 ************************************
00:04:38.563 START TEST default_locks_via_rpc
00:04:38.563 ************************************
00:04:38.563 14:29:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc
00:04:38.563 14:29:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=57985
00:04:38.563 14:29:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 57985
00:04:38.563 14:29:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 57985 ']'
00:04:38.563 14:29:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:38.563 14:29:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:38.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:38.563 14:29:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:38.563 14:29:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:38.563 14:29:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:38.563 14:29:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:04:38.563 [2024-10-01 14:29:30.109075] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization...
00:04:38.563 [2024-10-01 14:29:30.109206] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57985 ]
00:04:38.825 [2024-10-01 14:29:30.258856] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:38.825 [2024-10-01 14:29:30.447011] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:04:39.395 14:29:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:39.395 14:29:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:04:39.395 14:29:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:04:39.395 14:29:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:39.395 14:29:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:39.395 14:29:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:39.395 14:29:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:04:39.395 14:29:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:04:39.395 14:29:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:04:39.395 14:29:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:04:39.395 14:29:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:04:39.395 14:29:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:39.395 14:29:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:39.395 14:29:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:39.395 14:29:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 57985
00:04:39.395 14:29:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 57985
00:04:39.395 14:29:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:39.656 14:29:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 57985
00:04:39.656 14:29:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 57985 ']'
00:04:39.656 14:29:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 57985
00:04:39.656 14:29:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname
00:04:39.656 14:29:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:39.656 14:29:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57985
00:04:39.656 14:29:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:39.656 14:29:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:39.656 killing process with pid 57985
00:04:39.656 14:29:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57985'
00:04:39.656 14:29:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 57985
00:04:39.656 14:29:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 57985
00:04:41.571
00:04:41.571 real 0m2.880s
00:04:41.571 user 0m2.864s
00:04:41.571 sys 0m0.473s
00:04:41.571 14:29:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:41.571 ************************************
00:04:41.571 END TEST default_locks_via_rpc
00:04:41.571 ************************************
00:04:41.571 14:29:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:41.571 14:29:32 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:04:41.571 14:29:32 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:41.571 14:29:32 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:41.571 14:29:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:41.571 ************************************
00:04:41.571 START TEST non_locking_app_on_locked_coremask
00:04:41.571 ************************************
00:04:41.571 14:29:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask
00:04:41.571 14:29:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58048
00:04:41.571 14:29:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58048 /var/tmp/spdk.sock
00:04:41.571 14:29:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58048 ']'
00:04:41.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:41.571 14:29:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:41.571 14:29:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:41.571 14:29:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:41.571 14:29:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:41.571 14:29:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:41.571 14:29:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:04:41.571 [2024-10-01 14:29:33.053541] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization...
00:04:41.571 [2024-10-01 14:29:33.053665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58048 ]
00:04:41.571 [2024-10-01 14:29:33.196340] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:41.833 [2024-10-01 14:29:33.381477] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:04:42.406 14:29:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:42.406 14:29:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:04:42.406 14:29:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58064
00:04:42.406 14:29:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58064 /var/tmp/spdk2.sock
00:04:42.406 14:29:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58064 ']'
00:04:42.406 14:29:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:04:42.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:04:42.406 14:29:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:42.406 14:29:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:04:42.406 14:29:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:04:42.406 14:29:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:42.406 14:29:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:42.667 [2024-10-01 14:29:34.055934] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization...
00:04:42.667 [2024-10-01 14:29:34.056055] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58064 ]
00:04:42.667 [2024-10-01 14:29:34.210166] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:04:42.667 [2024-10-01 14:29:34.210226] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:42.928 [2024-10-01 14:29:34.571842] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:04:44.314 14:29:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:44.314 14:29:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:04:44.314 14:29:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58048
00:04:44.314 14:29:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58048
00:04:44.314 14:29:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:44.576 14:29:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58048
00:04:44.576 14:29:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58048 ']'
00:04:44.576 14:29:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58048
00:04:44.576 14:29:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:04:44.576 14:29:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:44.576 14:29:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58048
00:04:44.576 14:29:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:44.576 14:29:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:44.576 killing process with pid 58048
00:04:44.576 14:29:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58048'
00:04:44.576 14:29:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58048
00:04:44.576 14:29:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58048
00:04:47.925 14:29:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58064
00:04:47.925 14:29:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58064 ']'
00:04:47.925 14:29:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58064
00:04:47.925 14:29:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:04:47.925 14:29:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:47.925 14:29:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58064
00:04:47.925 14:29:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:47.925 14:29:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:47.925 killing process with pid 58064
00:04:47.925 14:29:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58064'
00:04:47.925 14:29:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58064
00:04:47.925 14:29:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58064
00:04:49.312
00:04:49.312 real 0m7.688s
00:04:49.312 user 0m7.907s
00:04:49.312 sys 0m0.924s
00:04:49.312 14:29:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:49.312 ************************************
00:04:49.312 END TEST non_locking_app_on_locked_coremask
00:04:49.312 ************************************
00:04:49.312 14:29:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:49.312 14:29:40 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:04:49.312 14:29:40 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:49.312 14:29:40 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:49.312 14:29:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:49.312 ************************************
00:04:49.312 START TEST locking_app_on_unlocked_coremask
00:04:49.312 ************************************
00:04:49.312 14:29:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask
00:04:49.312 14:29:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58172
00:04:49.312 14:29:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58172 /var/tmp/spdk.sock
00:04:49.312 14:29:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58172 ']'
00:04:49.312 14:29:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:49.312 14:29:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:49.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:49.312 14:29:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:04:49.312 14:29:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:49.312 14:29:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:49.312 14:29:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:49.312 [2024-10-01 14:29:40.805965] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization...
00:04:49.312 [2024-10-01 14:29:40.806497] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58172 ]
00:04:49.312 [2024-10-01 14:29:40.954872] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:04:49.312 [2024-10-01 14:29:40.955055] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:49.573 [2024-10-01 14:29:41.143735] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:04:50.198 14:29:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:50.198 14:29:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0
00:04:50.198 14:29:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58188
00:04:50.198 14:29:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58188 /var/tmp/spdk2.sock
00:04:50.198 14:29:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58188 ']'
00:04:50.198 14:29:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:04:50.198 14:29:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:04:50.198 14:29:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:50.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:04:50.198 14:29:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:04:50.198 14:29:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:50.198 14:29:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:50.198 [2024-10-01 14:29:41.813773] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization...
00:04:50.198 [2024-10-01 14:29:41.814337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58188 ]
00:04:50.459 [2024-10-01 14:29:41.967303] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:50.720 [2024-10-01 14:29:42.359729] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:04:52.106 14:29:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:52.106 14:29:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0
00:04:52.106 14:29:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58188
00:04:52.106 14:29:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58188
00:04:52.106 14:29:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:52.368 14:29:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58172
00:04:52.368 14:29:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58172 ']'
00:04:52.368 14:29:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 58172
00:04:52.368 14:29:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname
00:04:52.368 14:29:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:52.368 14:29:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58172
00:04:52.368 killing process with pid 58172
00:04:52.368 14:29:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:52.368 14:29:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:52.368 14:29:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58172'
00:04:52.368 14:29:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 58172
00:04:52.368 14:29:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 58172
00:04:55.713 14:29:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58188
00:04:55.713 14:29:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58188 ']'
00:04:55.713 14:29:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 58188
00:04:55.713 14:29:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname
00:04:55.713 14:29:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:55.713 14:29:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58188
00:04:55.713 killing process with pid 58188
00:04:55.713 14:29:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:55.713 14:29:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:55.713 14:29:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58188'
00:04:55.713 14:29:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 58188
00:04:55.713 14:29:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 58188
00:04:57.628 ************************************
00:04:57.628 END TEST locking_app_on_unlocked_coremask
00:04:57.628 ************************************
00:04:57.628
00:04:57.628 real 0m8.151s
00:04:57.628 user 0m8.344s
00:04:57.628 sys 0m0.935s
00:04:57.628 14:29:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:57.628 14:29:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:57.628 14:29:48 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:04:57.628 14:29:48 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:57.628 14:29:48 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:57.628 14:29:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:57.628 ************************************
00:04:57.628 START TEST locking_app_on_locked_coremask
00:04:57.628 ************************************
00:04:57.628 14:29:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask
00:04:57.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:57.628 14:29:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58301
00:04:57.628 14:29:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 58301 /var/tmp/spdk.sock
00:04:57.628 14:29:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58301 ']'
00:04:57.628 14:29:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:57.628 14:29:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:57.628 14:29:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:57.628 14:29:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:57.628 14:29:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:57.628 14:29:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:04:57.628 [2024-10-01 14:29:49.040107] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization...
00:04:57.628 [2024-10-01 14:29:49.040238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58301 ]
00:04:57.628 [2024-10-01 14:29:49.190672] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:57.889 [2024-10-01 14:29:49.384049] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:04:58.460 14:29:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:58.460 14:29:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:04:58.460 14:29:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:04:58.460 14:29:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58317
00:04:58.460 14:29:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58317 /var/tmp/spdk2.sock
00:04:58.460 14:29:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0
00:04:58.460 14:29:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58317 /var/tmp/spdk2.sock
00:04:58.460 14:29:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:04:58.460 14:29:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:04:58.460 14:29:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:04:58.460 14:29:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:04:58.460 14:29:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 58317 /var/tmp/spdk2.sock
00:04:58.460 14:29:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58317 ']'
00:04:58.460 14:29:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:04:58.460 14:29:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:58.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:04:58.460 14:29:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:04:58.460 14:29:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:58.460 14:29:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:58.460 [2024-10-01 14:29:50.077381] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization...
00:04:58.460 [2024-10-01 14:29:50.077687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58317 ]
00:04:58.813 [2024-10-01 14:29:50.231489] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58301 has claimed it.
00:04:58.813 [2024-10-01 14:29:50.231572] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:04:59.098 ERROR: process (pid: 58317) is no longer running
00:04:59.098 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (58317) - No such process
00:04:59.098 14:29:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:59.098 14:29:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1
00:04:59.098 14:29:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1
00:04:59.098 14:29:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:04:59.098 14:29:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:04:59.098 14:29:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:04:59.098 14:29:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 58301
00:04:59.098 14:29:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:59.098 14:29:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58301
00:04:59.360 14:29:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 58301
00:04:59.360 14:29:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58301 ']'
00:04:59.360 14:29:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58301
00:04:59.360 14:29:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:04:59.360 14:29:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:59.360 14:29:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58301
00:04:59.360 killing process with pid 58301
00:04:59.360 14:29:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:59.360 14:29:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:59.360 14:29:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58301'
00:04:59.360 14:29:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58301
00:04:59.360 14:29:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58301
00:05:01.275 ************************************
00:05:01.275 END TEST locking_app_on_locked_coremask
00:05:01.275 ************************************
00:05:01.275
00:05:01.275 real 0m3.646s
00:05:01.275 user 0m3.793s
00:05:01.275 sys 0m0.618s
00:05:01.275 14:29:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:01.275 14:29:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:01.275 14:29:52 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:05:01.275 14:29:52 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:01.275 14:29:52 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:01.275 14:29:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:01.275 ************************************
00:05:01.275 START TEST locking_overlapped_coremask
00:05:01.275 ************************************
00:05:01.275 14:29:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask
00:05:01.275 14:29:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58375
00:05:01.275 14:29:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 58375 /var/tmp/spdk.sock
00:05:01.275 14:29:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 58375 ']'
00:05:01.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:01.275 14:29:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:01.275 14:29:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:01.275 14:29:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:01.275 14:29:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:05:01.275 14:29:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:01.275 14:29:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:01.275 [2024-10-01 14:29:52.740346] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization...
00:05:01.275 [2024-10-01 14:29:52.740465] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58375 ]
00:05:01.275 [2024-10-01 14:29:52.891413] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:01.537 [2024-10-01 14:29:53.085150] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:05:01.537 [2024-10-01 14:29:53.085403] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:05:01.537 [2024-10-01 14:29:53.085406] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:05:02.111 14:29:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:02.111 14:29:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0
00:05:02.111 14:29:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58393
00:05:02.111 14:29:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58393 /var/tmp/spdk2.sock
00:05:02.111 14:29:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0
00:05:02.111 14:29:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58393 /var/tmp/spdk2.sock
00:05:02.111 14:29:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:05:02.111 14:29:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:05:02.111 14:29:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:02.111 14:29:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:05:02.111 14:29:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:02.111 14:29:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 58393 /var/tmp/spdk2.sock
00:05:02.111 14:29:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 58393 ']'
00:05:02.111 14:29:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:02.111 14:29:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:02.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:02.111 14:29:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:02.111 14:29:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:02.111 14:29:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:02.372 [2024-10-01 14:29:53.776086] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization...
00:05:02.372 [2024-10-01 14:29:53.776533] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58393 ]
00:05:02.372 [2024-10-01 14:29:53.936909] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58375 has claimed it.
00:05:02.372 [2024-10-01 14:29:53.936983] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:02.943 ERROR: process (pid: 58393) is no longer running
00:05:02.943 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (58393) - No such process
00:05:02.943 14:29:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:02.943 14:29:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1
00:05:02.943 14:29:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1
00:05:02.943 14:29:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:02.943 14:29:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:05:02.943 14:29:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:02.943 14:29:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:05:02.943 14:29:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:05:02.943 14:29:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:05:02.943 14:29:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:05:02.943 14:29:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 58375
00:05:02.943 14:29:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 58375 ']'
00:05:02.943 14:29:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 58375
00:05:02.943 14:29:54
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:05:02.943 14:29:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:02.943 14:29:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58375 00:05:02.943 14:29:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:02.943 14:29:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:02.943 killing process with pid 58375 00:05:02.943 14:29:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58375' 00:05:02.943 14:29:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 58375 00:05:02.943 14:29:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 58375 00:05:04.861 00:05:04.861 real 0m3.406s 00:05:04.861 user 0m8.903s 00:05:04.861 sys 0m0.453s 00:05:04.861 ************************************ 00:05:04.861 END TEST locking_overlapped_coremask 00:05:04.861 ************************************ 00:05:04.861 14:29:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:04.861 14:29:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:04.862 14:29:56 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:04.862 14:29:56 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:04.862 14:29:56 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:04.862 14:29:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:04.862 ************************************ 00:05:04.862 START TEST 
locking_overlapped_coremask_via_rpc 00:05:04.862 ************************************ 00:05:04.862 14:29:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:05:04.862 14:29:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58452 00:05:04.862 14:29:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:04.862 14:29:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 58452 /var/tmp/spdk.sock 00:05:04.862 14:29:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 58452 ']' 00:05:04.862 14:29:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.862 14:29:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:04.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.862 14:29:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.862 14:29:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:04.862 14:29:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.862 [2024-10-01 14:29:56.217199] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:05:04.862 [2024-10-01 14:29:56.217325] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58452 ] 00:05:04.862 [2024-10-01 14:29:56.364876] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:04.862 [2024-10-01 14:29:56.364924] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:05.122 [2024-10-01 14:29:56.556254] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.122 [2024-10-01 14:29:56.556640] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:05.122 [2024-10-01 14:29:56.556767] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.694 14:29:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:05.694 14:29:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:05.694 14:29:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58470 00:05:05.694 14:29:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 58470 /var/tmp/spdk2.sock 00:05:05.694 14:29:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:05.694 14:29:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 58470 ']' 00:05:05.694 14:29:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:05.694 14:29:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:05.694 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:05.694 14:29:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:05.694 14:29:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:05.694 14:29:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.694 [2024-10-01 14:29:57.225949] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:05:05.694 [2024-10-01 14:29:57.226091] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58470 ] 00:05:05.953 [2024-10-01 14:29:57.384147] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:05.953 [2024-10-01 14:29:57.384194] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:06.215 [2024-10-01 14:29:57.767212] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:06.215 [2024-10-01 14:29:57.767719] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:06.215 [2024-10-01 14:29:57.767746] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:05:07.666 14:29:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:07.666 14:29:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:07.666 14:29:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:07.666 14:29:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.666 14:29:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.666 14:29:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.666 14:29:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:07.666 14:29:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:07.666 14:29:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:07.666 14:29:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:07.666 14:29:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:07.666 14:29:58 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:07.666 14:29:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:07.666 14:29:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:07.666 14:29:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.666 14:29:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.666 [2024-10-01 14:29:59.009866] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58452 has claimed it. 00:05:07.666 request: 00:05:07.666 { 00:05:07.666 "method": "framework_enable_cpumask_locks", 00:05:07.666 "req_id": 1 00:05:07.666 } 00:05:07.666 Got JSON-RPC error response 00:05:07.666 response: 00:05:07.666 { 00:05:07.666 "code": -32603, 00:05:07.666 "message": "Failed to claim CPU core: 2" 00:05:07.666 } 00:05:07.666 14:29:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:07.666 14:29:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:07.666 14:29:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:07.666 14:29:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:07.666 14:29:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:07.666 14:29:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 58452 /var/tmp/spdk.sock 00:05:07.666 14:29:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # 
'[' -z 58452 ']' 00:05:07.666 14:29:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.666 14:29:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:07.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.666 14:29:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.666 14:29:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:07.666 14:29:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.666 14:29:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:07.666 14:29:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:07.666 14:29:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 58470 /var/tmp/spdk2.sock 00:05:07.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:07.666 14:29:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 58470 ']' 00:05:07.666 14:29:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:07.666 14:29:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:07.666 14:29:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:07.666 14:29:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:07.666 14:29:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.927 14:29:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:07.927 14:29:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:07.927 14:29:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:07.927 14:29:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:07.927 14:29:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:07.927 14:29:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:07.927 00:05:07.927 real 0m3.326s 00:05:07.927 user 0m1.120s 00:05:07.927 sys 0m0.112s 00:05:07.927 14:29:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:07.927 ************************************ 00:05:07.927 END TEST locking_overlapped_coremask_via_rpc 00:05:07.927 ************************************ 00:05:07.927 14:29:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.927 14:29:59 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:07.927 14:29:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58452 ]] 00:05:07.927 14:29:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 58452 00:05:07.927 14:29:59 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 58452 ']' 00:05:07.927 14:29:59 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 58452 00:05:07.927 14:29:59 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:07.927 14:29:59 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:07.927 14:29:59 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58452 00:05:07.927 14:29:59 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:07.927 14:29:59 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:07.927 killing process with pid 58452 00:05:07.927 14:29:59 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58452' 00:05:07.927 14:29:59 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 58452 00:05:07.927 14:29:59 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 58452 00:05:09.869 14:30:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58470 ]] 00:05:09.869 14:30:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58470 00:05:09.869 14:30:01 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 58470 ']' 00:05:09.869 14:30:01 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 58470 00:05:09.869 14:30:01 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:09.869 14:30:01 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:09.869 14:30:01 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58470 00:05:09.869 killing process with pid 58470 00:05:09.869 14:30:01 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:09.869 14:30:01 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:09.869 14:30:01 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 58470' 00:05:09.869 14:30:01 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 58470 00:05:09.869 14:30:01 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 58470 00:05:11.254 14:30:02 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:11.254 14:30:02 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:11.254 14:30:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58452 ]] 00:05:11.254 14:30:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58452 00:05:11.254 14:30:02 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 58452 ']' 00:05:11.254 14:30:02 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 58452 00:05:11.254 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (58452) - No such process 00:05:11.254 Process with pid 58452 is not found 00:05:11.254 14:30:02 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 58452 is not found' 00:05:11.254 14:30:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58470 ]] 00:05:11.254 Process with pid 58470 is not found 00:05:11.254 14:30:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58470 00:05:11.254 14:30:02 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 58470 ']' 00:05:11.254 14:30:02 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 58470 00:05:11.254 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (58470) - No such process 00:05:11.254 14:30:02 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 58470 is not found' 00:05:11.254 14:30:02 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:11.254 00:05:11.254 real 0m35.938s 00:05:11.254 user 1m0.525s 00:05:11.254 sys 0m4.815s 00:05:11.254 14:30:02 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:11.254 ************************************ 00:05:11.254 END TEST cpu_locks 00:05:11.254 
************************************ 00:05:11.254 14:30:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.516 00:05:11.516 real 1m5.683s 00:05:11.516 user 1m58.697s 00:05:11.516 sys 0m7.896s 00:05:11.516 14:30:02 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:11.516 ************************************ 00:05:11.516 END TEST event 00:05:11.516 ************************************ 00:05:11.516 14:30:02 event -- common/autotest_common.sh@10 -- # set +x 00:05:11.516 14:30:02 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:11.516 14:30:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:11.516 14:30:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.516 14:30:02 -- common/autotest_common.sh@10 -- # set +x 00:05:11.516 ************************************ 00:05:11.516 START TEST thread 00:05:11.516 ************************************ 00:05:11.516 14:30:03 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:11.516 * Looking for test storage... 
00:05:11.516 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:11.516 14:30:03 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:11.516 14:30:03 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:05:11.516 14:30:03 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:11.516 14:30:03 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:11.516 14:30:03 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:11.516 14:30:03 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:11.516 14:30:03 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:11.516 14:30:03 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.516 14:30:03 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:11.516 14:30:03 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:11.516 14:30:03 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:11.516 14:30:03 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:11.516 14:30:03 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:11.516 14:30:03 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:11.516 14:30:03 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:11.516 14:30:03 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:11.516 14:30:03 thread -- scripts/common.sh@345 -- # : 1 00:05:11.516 14:30:03 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:11.516 14:30:03 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:11.516 14:30:03 thread -- scripts/common.sh@365 -- # decimal 1 00:05:11.516 14:30:03 thread -- scripts/common.sh@353 -- # local d=1 00:05:11.516 14:30:03 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.516 14:30:03 thread -- scripts/common.sh@355 -- # echo 1 00:05:11.516 14:30:03 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:11.516 14:30:03 thread -- scripts/common.sh@366 -- # decimal 2 00:05:11.516 14:30:03 thread -- scripts/common.sh@353 -- # local d=2 00:05:11.516 14:30:03 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.516 14:30:03 thread -- scripts/common.sh@355 -- # echo 2 00:05:11.516 14:30:03 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:11.516 14:30:03 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:11.516 14:30:03 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:11.516 14:30:03 thread -- scripts/common.sh@368 -- # return 0 00:05:11.516 14:30:03 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.516 14:30:03 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:11.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.516 --rc genhtml_branch_coverage=1 00:05:11.516 --rc genhtml_function_coverage=1 00:05:11.516 --rc genhtml_legend=1 00:05:11.516 --rc geninfo_all_blocks=1 00:05:11.516 --rc geninfo_unexecuted_blocks=1 00:05:11.516 00:05:11.516 ' 00:05:11.516 14:30:03 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:11.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.516 --rc genhtml_branch_coverage=1 00:05:11.516 --rc genhtml_function_coverage=1 00:05:11.516 --rc genhtml_legend=1 00:05:11.516 --rc geninfo_all_blocks=1 00:05:11.516 --rc geninfo_unexecuted_blocks=1 00:05:11.516 00:05:11.516 ' 00:05:11.516 14:30:03 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:11.516 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.516 --rc genhtml_branch_coverage=1 00:05:11.517 --rc genhtml_function_coverage=1 00:05:11.517 --rc genhtml_legend=1 00:05:11.517 --rc geninfo_all_blocks=1 00:05:11.517 --rc geninfo_unexecuted_blocks=1 00:05:11.517 00:05:11.517 ' 00:05:11.517 14:30:03 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:11.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.517 --rc genhtml_branch_coverage=1 00:05:11.517 --rc genhtml_function_coverage=1 00:05:11.517 --rc genhtml_legend=1 00:05:11.517 --rc geninfo_all_blocks=1 00:05:11.517 --rc geninfo_unexecuted_blocks=1 00:05:11.517 00:05:11.517 ' 00:05:11.517 14:30:03 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:11.517 14:30:03 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:11.517 14:30:03 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.517 14:30:03 thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.517 ************************************ 00:05:11.517 START TEST thread_poller_perf 00:05:11.517 ************************************ 00:05:11.517 14:30:03 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:11.779 [2024-10-01 14:30:03.202462] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:05:11.779 [2024-10-01 14:30:03.202583] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58641 ] 00:05:11.779 [2024-10-01 14:30:03.352456] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.040 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:12.040 [2024-10-01 14:30:03.541093] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.483 ====================================== 00:05:13.483 busy:2610633624 (cyc) 00:05:13.483 total_run_count: 305000 00:05:13.483 tsc_hz: 2600000000 (cyc) 00:05:13.483 ====================================== 00:05:13.483 poller_cost: 8559 (cyc), 3291 (nsec) 00:05:13.483 00:05:13.483 real 0m1.653s 00:05:13.483 user 0m1.458s 00:05:13.483 sys 0m0.084s 00:05:13.483 ************************************ 00:05:13.483 END TEST thread_poller_perf 00:05:13.483 14:30:04 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.483 14:30:04 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:13.483 ************************************ 00:05:13.483 14:30:04 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:13.483 14:30:04 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:13.483 14:30:04 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.483 14:30:04 thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.483 ************************************ 00:05:13.483 START TEST thread_poller_perf 00:05:13.483 ************************************ 00:05:13.483 14:30:04 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 
1000 -l 0 -t 1 00:05:13.483 [2024-10-01 14:30:04.928793] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:05:13.483 [2024-10-01 14:30:04.928940] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58677 ] 00:05:13.483 [2024-10-01 14:30:05.080136] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.744 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:13.744 [2024-10-01 14:30:05.267863] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.134 ====================================== 00:05:15.134 busy:2603019002 (cyc) 00:05:15.134 total_run_count: 3962000 00:05:15.134 tsc_hz: 2600000000 (cyc) 00:05:15.134 ====================================== 00:05:15.134 poller_cost: 656 (cyc), 252 (nsec) 00:05:15.134 00:05:15.134 real 0m1.647s 00:05:15.134 user 0m1.452s 00:05:15.134 sys 0m0.086s 00:05:15.134 14:30:06 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:15.134 14:30:06 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:15.134 ************************************ 00:05:15.134 END TEST thread_poller_perf 00:05:15.134 ************************************ 00:05:15.134 14:30:06 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:15.134 00:05:15.134 real 0m3.579s 00:05:15.134 user 0m3.027s 00:05:15.134 sys 0m0.297s 00:05:15.134 14:30:06 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:15.134 ************************************ 00:05:15.134 END TEST thread 00:05:15.134 ************************************ 00:05:15.134 14:30:06 thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.134 14:30:06 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:15.134 14:30:06 -- spdk/autotest.sh@176 -- # run_test 
app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:15.134 14:30:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:15.134 14:30:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:15.134 14:30:06 -- common/autotest_common.sh@10 -- # set +x 00:05:15.134 ************************************ 00:05:15.134 START TEST app_cmdline 00:05:15.134 ************************************ 00:05:15.134 14:30:06 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:15.134 * Looking for test storage... 00:05:15.134 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:15.134 14:30:06 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:15.134 14:30:06 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:05:15.134 14:30:06 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:15.134 14:30:06 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:15.134 14:30:06 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.134 14:30:06 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.134 14:30:06 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.134 14:30:06 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.134 14:30:06 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.134 14:30:06 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.134 14:30:06 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.134 14:30:06 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.134 14:30:06 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.134 14:30:06 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.134 14:30:06 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.134 14:30:06 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:15.134 14:30:06 app_cmdline -- 
scripts/common.sh@345 -- # : 1 00:05:15.134 14:30:06 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.134 14:30:06 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:15.134 14:30:06 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:15.134 14:30:06 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:15.134 14:30:06 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.134 14:30:06 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:15.134 14:30:06 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.134 14:30:06 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:15.134 14:30:06 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:15.134 14:30:06 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.134 14:30:06 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:15.134 14:30:06 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.134 14:30:06 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.134 14:30:06 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.134 14:30:06 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:15.134 14:30:06 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.134 14:30:06 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:15.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.134 --rc genhtml_branch_coverage=1 00:05:15.134 --rc genhtml_function_coverage=1 00:05:15.134 --rc genhtml_legend=1 00:05:15.134 --rc geninfo_all_blocks=1 00:05:15.134 --rc geninfo_unexecuted_blocks=1 00:05:15.134 00:05:15.134 ' 00:05:15.134 14:30:06 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:15.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.134 --rc genhtml_branch_coverage=1 00:05:15.134 --rc 
genhtml_function_coverage=1 00:05:15.134 --rc genhtml_legend=1 00:05:15.134 --rc geninfo_all_blocks=1 00:05:15.134 --rc geninfo_unexecuted_blocks=1 00:05:15.134 00:05:15.134 ' 00:05:15.134 14:30:06 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:15.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.134 --rc genhtml_branch_coverage=1 00:05:15.134 --rc genhtml_function_coverage=1 00:05:15.134 --rc genhtml_legend=1 00:05:15.134 --rc geninfo_all_blocks=1 00:05:15.134 --rc geninfo_unexecuted_blocks=1 00:05:15.134 00:05:15.134 ' 00:05:15.134 14:30:06 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:15.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.134 --rc genhtml_branch_coverage=1 00:05:15.134 --rc genhtml_function_coverage=1 00:05:15.134 --rc genhtml_legend=1 00:05:15.134 --rc geninfo_all_blocks=1 00:05:15.134 --rc geninfo_unexecuted_blocks=1 00:05:15.134 00:05:15.134 ' 00:05:15.135 14:30:06 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:15.135 14:30:06 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=58761 00:05:15.135 14:30:06 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:15.135 14:30:06 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 58761 00:05:15.135 14:30:06 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 58761 ']' 00:05:15.135 14:30:06 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.135 14:30:06 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:15.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.135 14:30:06 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:15.135 14:30:06 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:15.135 14:30:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:15.397 [2024-10-01 14:30:06.863953] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:05:15.397 [2024-10-01 14:30:06.864588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58761 ] 00:05:15.397 [2024-10-01 14:30:07.028578] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.657 [2024-10-01 14:30:07.267997] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.230 14:30:07 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:16.230 14:30:07 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:05:16.230 14:30:07 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:16.491 { 00:05:16.491 "version": "SPDK v25.01-pre git sha1 1c027d356", 00:05:16.491 "fields": { 00:05:16.491 "major": 25, 00:05:16.491 "minor": 1, 00:05:16.491 "patch": 0, 00:05:16.491 "suffix": "-pre", 00:05:16.491 "commit": "1c027d356" 00:05:16.491 } 00:05:16.491 } 00:05:16.491 14:30:08 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:16.491 14:30:08 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:16.491 14:30:08 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:16.491 14:30:08 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:16.491 14:30:08 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:16.491 14:30:08 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:16.491 14:30:08 app_cmdline -- common/autotest_common.sh@10 
-- # set +x 00:05:16.491 14:30:08 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:16.491 14:30:08 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:16.491 14:30:08 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:16.491 14:30:08 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:16.491 14:30:08 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:16.491 14:30:08 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:16.491 14:30:08 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:16.491 14:30:08 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:16.491 14:30:08 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:16.491 14:30:08 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:16.491 14:30:08 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:16.491 14:30:08 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:16.491 14:30:08 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:16.491 14:30:08 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:16.491 14:30:08 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:16.491 14:30:08 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:16.491 14:30:08 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:16.752 request: 00:05:16.752 { 00:05:16.752 "method": "env_dpdk_get_mem_stats", 00:05:16.752 
"req_id": 1 00:05:16.752 } 00:05:16.753 Got JSON-RPC error response 00:05:16.753 response: 00:05:16.753 { 00:05:16.753 "code": -32601, 00:05:16.753 "message": "Method not found" 00:05:16.753 } 00:05:16.753 14:30:08 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:16.753 14:30:08 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:16.753 14:30:08 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:16.753 14:30:08 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:16.753 14:30:08 app_cmdline -- app/cmdline.sh@1 -- # killprocess 58761 00:05:16.753 14:30:08 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 58761 ']' 00:05:16.753 14:30:08 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 58761 00:05:16.753 14:30:08 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:05:16.753 14:30:08 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:16.753 14:30:08 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58761 00:05:16.753 14:30:08 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:16.753 killing process with pid 58761 00:05:16.753 14:30:08 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:16.753 14:30:08 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58761' 00:05:16.753 14:30:08 app_cmdline -- common/autotest_common.sh@969 -- # kill 58761 00:05:16.753 14:30:08 app_cmdline -- common/autotest_common.sh@974 -- # wait 58761 00:05:18.665 00:05:18.665 real 0m3.365s 00:05:18.665 user 0m3.661s 00:05:18.665 sys 0m0.455s 00:05:18.665 ************************************ 00:05:18.665 END TEST app_cmdline 00:05:18.665 ************************************ 00:05:18.665 14:30:09 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.665 14:30:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:18.665 14:30:10 -- 
spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:18.665 14:30:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.665 14:30:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.665 14:30:10 -- common/autotest_common.sh@10 -- # set +x 00:05:18.665 ************************************ 00:05:18.665 START TEST version 00:05:18.665 ************************************ 00:05:18.665 14:30:10 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:18.665 * Looking for test storage... 00:05:18.665 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:18.665 14:30:10 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:18.665 14:30:10 version -- common/autotest_common.sh@1681 -- # lcov --version 00:05:18.665 14:30:10 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:18.665 14:30:10 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:18.665 14:30:10 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.665 14:30:10 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.665 14:30:10 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.665 14:30:10 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.665 14:30:10 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.665 14:30:10 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.665 14:30:10 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.665 14:30:10 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.665 14:30:10 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.665 14:30:10 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.665 14:30:10 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.665 14:30:10 version -- scripts/common.sh@344 -- # case "$op" in 00:05:18.665 14:30:10 version -- scripts/common.sh@345 -- # : 1 00:05:18.665 14:30:10 version -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.665 14:30:10 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:18.665 14:30:10 version -- scripts/common.sh@365 -- # decimal 1 00:05:18.665 14:30:10 version -- scripts/common.sh@353 -- # local d=1 00:05:18.665 14:30:10 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.665 14:30:10 version -- scripts/common.sh@355 -- # echo 1 00:05:18.665 14:30:10 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.665 14:30:10 version -- scripts/common.sh@366 -- # decimal 2 00:05:18.665 14:30:10 version -- scripts/common.sh@353 -- # local d=2 00:05:18.665 14:30:10 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.665 14:30:10 version -- scripts/common.sh@355 -- # echo 2 00:05:18.665 14:30:10 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.665 14:30:10 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.665 14:30:10 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.665 14:30:10 version -- scripts/common.sh@368 -- # return 0 00:05:18.665 14:30:10 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.665 14:30:10 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:18.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.665 --rc genhtml_branch_coverage=1 00:05:18.665 --rc genhtml_function_coverage=1 00:05:18.665 --rc genhtml_legend=1 00:05:18.665 --rc geninfo_all_blocks=1 00:05:18.665 --rc geninfo_unexecuted_blocks=1 00:05:18.665 00:05:18.665 ' 00:05:18.665 14:30:10 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:18.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.665 --rc genhtml_branch_coverage=1 00:05:18.665 --rc genhtml_function_coverage=1 00:05:18.665 --rc genhtml_legend=1 00:05:18.665 --rc geninfo_all_blocks=1 00:05:18.665 --rc geninfo_unexecuted_blocks=1 
00:05:18.665 00:05:18.665 ' 00:05:18.665 14:30:10 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:18.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.665 --rc genhtml_branch_coverage=1 00:05:18.665 --rc genhtml_function_coverage=1 00:05:18.665 --rc genhtml_legend=1 00:05:18.665 --rc geninfo_all_blocks=1 00:05:18.665 --rc geninfo_unexecuted_blocks=1 00:05:18.665 00:05:18.665 ' 00:05:18.665 14:30:10 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:18.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.666 --rc genhtml_branch_coverage=1 00:05:18.666 --rc genhtml_function_coverage=1 00:05:18.666 --rc genhtml_legend=1 00:05:18.666 --rc geninfo_all_blocks=1 00:05:18.666 --rc geninfo_unexecuted_blocks=1 00:05:18.666 00:05:18.666 ' 00:05:18.666 14:30:10 version -- app/version.sh@17 -- # get_header_version major 00:05:18.666 14:30:10 version -- app/version.sh@14 -- # cut -f2 00:05:18.666 14:30:10 version -- app/version.sh@14 -- # tr -d '"' 00:05:18.666 14:30:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:18.666 14:30:10 version -- app/version.sh@17 -- # major=25 00:05:18.666 14:30:10 version -- app/version.sh@18 -- # get_header_version minor 00:05:18.666 14:30:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:18.666 14:30:10 version -- app/version.sh@14 -- # tr -d '"' 00:05:18.666 14:30:10 version -- app/version.sh@14 -- # cut -f2 00:05:18.666 14:30:10 version -- app/version.sh@18 -- # minor=1 00:05:18.666 14:30:10 version -- app/version.sh@19 -- # get_header_version patch 00:05:18.666 14:30:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:18.666 14:30:10 version -- app/version.sh@14 -- # cut -f2 00:05:18.666 
14:30:10 version -- app/version.sh@14 -- # tr -d '"' 00:05:18.666 14:30:10 version -- app/version.sh@19 -- # patch=0 00:05:18.666 14:30:10 version -- app/version.sh@20 -- # get_header_version suffix 00:05:18.666 14:30:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:18.666 14:30:10 version -- app/version.sh@14 -- # tr -d '"' 00:05:18.666 14:30:10 version -- app/version.sh@14 -- # cut -f2 00:05:18.666 14:30:10 version -- app/version.sh@20 -- # suffix=-pre 00:05:18.666 14:30:10 version -- app/version.sh@22 -- # version=25.1 00:05:18.666 14:30:10 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:18.666 14:30:10 version -- app/version.sh@28 -- # version=25.1rc0 00:05:18.666 14:30:10 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:18.666 14:30:10 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:18.666 14:30:10 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:18.666 14:30:10 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:18.666 00:05:18.666 real 0m0.205s 00:05:18.666 user 0m0.126s 00:05:18.666 sys 0m0.101s 00:05:18.666 ************************************ 00:05:18.666 END TEST version 00:05:18.666 14:30:10 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.666 14:30:10 version -- common/autotest_common.sh@10 -- # set +x 00:05:18.666 ************************************ 00:05:18.666 14:30:10 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:18.666 14:30:10 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:05:18.666 14:30:10 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:05:18.666 14:30:10 -- common/autotest_common.sh@1101 
-- # '[' 2 -le 1 ']' 00:05:18.666 14:30:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.666 14:30:10 -- common/autotest_common.sh@10 -- # set +x 00:05:18.666 ************************************ 00:05:18.666 START TEST bdev_raid 00:05:18.666 ************************************ 00:05:18.666 14:30:10 bdev_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:05:18.927 * Looking for test storage... 00:05:18.927 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:05:18.927 14:30:10 bdev_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:18.927 14:30:10 bdev_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:05:18.927 14:30:10 bdev_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:18.927 14:30:10 bdev_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:18.927 14:30:10 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.927 14:30:10 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.927 14:30:10 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.927 14:30:10 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.927 14:30:10 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.927 14:30:10 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.927 14:30:10 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.927 14:30:10 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.927 14:30:10 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.927 14:30:10 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.927 14:30:10 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.927 14:30:10 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:05:18.927 14:30:10 bdev_raid -- scripts/common.sh@345 -- # : 1 00:05:18.927 14:30:10 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.927 14:30:10 bdev_raid -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:18.927 14:30:10 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:05:18.927 14:30:10 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:05:18.927 14:30:10 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.927 14:30:10 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:05:18.927 14:30:10 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.927 14:30:10 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:05:18.927 14:30:10 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:05:18.927 14:30:10 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.927 14:30:10 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:05:18.927 14:30:10 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.927 14:30:10 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.927 14:30:10 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.927 14:30:10 bdev_raid -- scripts/common.sh@368 -- # return 0 00:05:18.927 14:30:10 bdev_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.927 14:30:10 bdev_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:18.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.927 --rc genhtml_branch_coverage=1 00:05:18.927 --rc genhtml_function_coverage=1 00:05:18.927 --rc genhtml_legend=1 00:05:18.927 --rc geninfo_all_blocks=1 00:05:18.927 --rc geninfo_unexecuted_blocks=1 00:05:18.927 00:05:18.927 ' 00:05:18.927 14:30:10 bdev_raid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:18.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.927 --rc genhtml_branch_coverage=1 00:05:18.927 --rc genhtml_function_coverage=1 00:05:18.927 --rc genhtml_legend=1 00:05:18.927 --rc geninfo_all_blocks=1 00:05:18.927 --rc geninfo_unexecuted_blocks=1 00:05:18.927 00:05:18.927 ' 00:05:18.927 14:30:10 bdev_raid -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:18.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.927 --rc genhtml_branch_coverage=1 00:05:18.927 --rc genhtml_function_coverage=1 00:05:18.927 --rc genhtml_legend=1 00:05:18.927 --rc geninfo_all_blocks=1 00:05:18.927 --rc geninfo_unexecuted_blocks=1 00:05:18.927 00:05:18.927 ' 00:05:18.927 14:30:10 bdev_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:18.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.927 --rc genhtml_branch_coverage=1 00:05:18.927 --rc genhtml_function_coverage=1 00:05:18.927 --rc genhtml_legend=1 00:05:18.927 --rc geninfo_all_blocks=1 00:05:18.927 --rc geninfo_unexecuted_blocks=1 00:05:18.927 00:05:18.927 ' 00:05:18.927 14:30:10 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:18.927 14:30:10 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:05:18.927 14:30:10 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:05:18.927 14:30:10 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:05:18.927 14:30:10 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:05:18.927 14:30:10 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:05:18.927 14:30:10 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:05:18.927 14:30:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.927 14:30:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.927 14:30:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:05:18.927 ************************************ 00:05:18.927 START TEST raid1_resize_data_offset_test 00:05:18.927 ************************************ 00:05:18.927 14:30:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1125 -- # raid_resize_data_offset_test 00:05:18.927 Process raid pid: 58943 00:05:18.927 14:30:10 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=58943 00:05:18.927 14:30:10 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 58943' 00:05:18.927 14:30:10 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 58943 00:05:18.927 14:30:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@831 -- # '[' -z 58943 ']' 00:05:18.928 14:30:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.928 14:30:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:18.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.928 14:30:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.928 14:30:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:18.928 14:30:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:05:18.928 14:30:10 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:05:18.928 [2024-10-01 14:30:10.549922] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:05:18.928 [2024-10-01 14:30:10.550046] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:19.189 [2024-10-01 14:30:10.700121] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.449 [2024-10-01 14:30:10.891604] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.449 [2024-10-01 14:30:11.030751] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:19.449 [2024-10-01 14:30:11.030791] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:20.019 14:30:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:20.019 14:30:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # return 0 00:05:20.019 14:30:11 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:05:20.019 14:30:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.019 14:30:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:05:20.019 malloc0 00:05:20.019 14:30:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.019 14:30:11 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:05:20.019 14:30:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.019 14:30:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:05:20.019 malloc1 00:05:20.019 14:30:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.019 14:30:11 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:05:20.019 14:30:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.019 14:30:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:05:20.019 null0 00:05:20.019 14:30:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.019 14:30:11 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:05:20.019 14:30:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.019 14:30:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:05:20.019 [2024-10-01 14:30:11.536377] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:05:20.019 [2024-10-01 14:30:11.538219] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:05:20.019 [2024-10-01 14:30:11.538284] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:05:20.019 [2024-10-01 14:30:11.538416] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:05:20.019 [2024-10-01 14:30:11.538428] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:05:20.019 [2024-10-01 14:30:11.538695] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:05:20.019 [2024-10-01 14:30:11.538854] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:05:20.019 [2024-10-01 14:30:11.538865] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:05:20.019 [2024-10-01 14:30:11.539010] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:05:20.019 14:30:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.019 14:30:11 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:20.019 14:30:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.019 14:30:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:05:20.019 14:30:11 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:05:20.019 14:30:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.019 14:30:11 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:05:20.019 14:30:11 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:05:20.019 14:30:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.019 14:30:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:05:20.019 [2024-10-01 14:30:11.580373] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:05:20.019 14:30:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.019 14:30:11 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:05:20.019 14:30:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.019 14:30:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:05:20.281 malloc2 00:05:20.281 14:30:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.281 14:30:11 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:05:20.281 14:30:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.281 14:30:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:05:20.281 [2024-10-01 14:30:11.947999] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:05:20.281 [2024-10-01 14:30:11.958873] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:05:20.281 14:30:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.281 [2024-10-01 14:30:11.960669] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:05:20.281 14:30:11 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:20.281 14:30:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.281 14:30:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:05:20.281 14:30:11 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:05:20.543 14:30:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.543 14:30:11 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:05:20.543 14:30:11 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 58943 00:05:20.543 14:30:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@950 -- # '[' -z 58943 ']' 00:05:20.544 14:30:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # kill -0 58943 00:05:20.544 14:30:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # uname 00:05:20.544 14:30:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:05:20.544 14:30:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58943 00:05:20.544 killing process with pid 58943 00:05:20.544 14:30:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:20.544 14:30:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:20.544 14:30:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58943' 00:05:20.544 14:30:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@969 -- # kill 58943 00:05:20.544 14:30:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@974 -- # wait 58943 00:05:20.544 [2024-10-01 14:30:12.023128] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:05:20.544 [2024-10-01 14:30:12.023408] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:05:20.544 [2024-10-01 14:30:12.023465] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:05:20.544 [2024-10-01 14:30:12.023481] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:05:20.544 [2024-10-01 14:30:12.042068] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:05:20.544 [2024-10-01 14:30:12.042375] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:05:20.544 [2024-10-01 14:30:12.042392] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:05:21.487 [2024-10-01 14:30:13.135803] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:05:22.428 14:30:13 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:05:22.428 00:05:22.428 real 0m3.473s 00:05:22.428 user 0m3.441s 00:05:22.428 sys 0m0.403s 00:05:22.428 
************************************ 00:05:22.428 END TEST raid1_resize_data_offset_test 00:05:22.428 ************************************ 00:05:22.428 14:30:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:22.428 14:30:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:05:22.428 14:30:14 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:05:22.428 14:30:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:22.428 14:30:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:22.428 14:30:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:05:22.428 ************************************ 00:05:22.428 START TEST raid0_resize_superblock_test 00:05:22.428 ************************************ 00:05:22.428 14:30:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 0 00:05:22.428 14:30:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:05:22.428 Process raid pid: 59010 00:05:22.428 14:30:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=59010 00:05:22.428 14:30:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 59010' 00:05:22.428 14:30:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 59010 00:05:22.428 14:30:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 59010 ']' 00:05:22.428 14:30:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.428 14:30:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:22.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
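The killprocess steps traced above (empty-pid check, `kill -0` liveness probe, `ps --no-headers -o comm=` name lookup, then kill and wait) can be reconstructed as a standalone sketch. This is an approximation of the traced behavior, not the actual `autotest_common.sh` helper; the `reactor_0 = sudo` branch seen in the trace is simplified away here:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess helper, reconstructed from the xtrace above.
# The sudo-reparenting check ('[' reactor_0 = sudo ']') is omitted for brevity.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                 # no pid given
    kill -0 "$pid" 2>/dev/null || return 1    # process must still exist
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    echo "killing process with pid $pid ($process_name)"
    kill "$pid"
    wait "$pid" 2>/dev/null                   # reap it; ignore the TERM status
    return 0
}

sleep 30 &       # stand-in for the bdev_svc process the log is killing
pid=$!
killprocess "$pid"
rc=$?
echo "killprocess rc=$rc"
```

The `wait` after `kill` matches the `# wait 58943` step in the trace: it guarantees the process is reaped before the test moves on.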
00:05:22.428 14:30:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.428 14:30:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:22.428 14:30:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:22.428 14:30:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:05:22.428 [2024-10-01 14:30:14.075645] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:05:22.428 [2024-10-01 14:30:14.075783] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:22.689 [2024-10-01 14:30:14.219080] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.950 [2024-10-01 14:30:14.403453] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.950 [2024-10-01 14:30:14.539625] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:22.950 [2024-10-01 14:30:14.539663] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:23.522 14:30:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:23.522 14:30:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:05:23.522 14:30:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:05:23.522 14:30:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.522 14:30:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:05:23.781 malloc0 00:05:23.781 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.781 14:30:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:05:23.781 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.781 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:23.781 [2024-10-01 14:30:15.290494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:05:23.781 [2024-10-01 14:30:15.290559] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:23.781 [2024-10-01 14:30:15.290580] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:05:23.781 [2024-10-01 14:30:15.290591] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:23.781 [2024-10-01 14:30:15.292693] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:23.781 [2024-10-01 14:30:15.292758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:05:23.781 pt0 00:05:23.781 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.781 14:30:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:05:23.781 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.781 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:23.781 2433ee5e-5516-41a1-a325-f69606144ffc 00:05:23.781 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.782 14:30:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:05:23.782 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.782 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:23.782 cd044a1d-e56d-4f17-a338-f55423ab04cb 00:05:23.782 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.782 14:30:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:05:23.782 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.782 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:23.782 44ea94ef-2725-4f53-a95c-e293b375bf35 00:05:23.782 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.782 14:30:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:05:23.782 14:30:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:05:23.782 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.782 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:23.782 [2024-10-01 14:30:15.379581] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev cd044a1d-e56d-4f17-a338-f55423ab04cb is claimed 00:05:23.782 [2024-10-01 14:30:15.379665] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 44ea94ef-2725-4f53-a95c-e293b375bf35 is claimed 00:05:23.782 [2024-10-01 14:30:15.379806] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:05:23.782 [2024-10-01 14:30:15.379821] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:05:23.782 [2024-10-01 14:30:15.380060] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:05:23.782 [2024-10-01 14:30:15.380225] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:05:23.782 [2024-10-01 14:30:15.380235] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:05:23.782 [2024-10-01 14:30:15.380375] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:05:23.782 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.782 14:30:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:05:23.782 14:30:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:05:23.782 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.782 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:23.782 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.782 14:30:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:05:23.782 14:30:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:05:23.782 14:30:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:05:23.782 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.782 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:23.782 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.782 14:30:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:05:23.782 14:30:15 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:05:23.782 14:30:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:05:23.782 14:30:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:05:23.782 14:30:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:05:23.782 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.782 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:23.782 [2024-10-01 14:30:15.447813] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:05:23.782 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.042 14:30:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:05:24.042 14:30:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:05:24.042 14:30:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:05:24.042 14:30:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:05:24.042 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.042 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:24.042 [2024-10-01 14:30:15.479775] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:05:24.042 [2024-10-01 14:30:15.479803] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'cd044a1d-e56d-4f17-a338-f55423ab04cb' was resized: old size 131072, new size 204800 00:05:24.042 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:05:24.042 14:30:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:05:24.042 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.042 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:24.042 [2024-10-01 14:30:15.487734] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:05:24.043 [2024-10-01 14:30:15.487758] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '44ea94ef-2725-4f53-a95c-e293b375bf35' was resized: old size 131072, new size 204800 00:05:24.043 [2024-10-01 14:30:15.487782] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.043 14:30:15 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:24.043 [2024-10-01 14:30:15.563875] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:24.043 [2024-10-01 14:30:15.587612] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:05:24.043 [2024-10-01 14:30:15.587685] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:05:24.043 [2024-10-01 14:30:15.587696] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:05:24.043 [2024-10-01 14:30:15.587727] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:05:24.043 [2024-10-01 14:30:15.587823] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:05:24.043 [2024-10-01 14:30:15.587859] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:05:24.043 [2024-10-01 14:30:15.587871] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:24.043 [2024-10-01 14:30:15.595577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:05:24.043 [2024-10-01 14:30:15.595632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:24.043 [2024-10-01 14:30:15.595650] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:05:24.043 [2024-10-01 14:30:15.595661] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:24.043 [2024-10-01 14:30:15.597817] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:24.043 [2024-10-01 14:30:15.597850] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
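The block counts logged by the resize path are mutually consistent: each 64 MiB lvol is 131072 blocks of 512 B, a per-base-bdev reserved region works out to 8192 blocks (inferred purely from these logged numbers, not confirmed against SPDK source), and raid0 over two members doubles the usable count. The arithmetic, in the log's own `(( ... ))` idiom:

```shell
#!/usr/bin/env bash
# Arithmetic behind the logged resize numbers (blocklen 512).
blocklen=512
mib=$(( 1024 * 1024 ))

lvol_old=$(( 64 * mib / blocklen ))    # "old size 131072"
lvol_new=$(( 100 * mib / blocklen ))   # "new size 204800"

# Reserved blocks per base bdev, inferred from 245760 == 2 * (131072 - r).
reserved=$(( lvol_old - 245760 / 2 ))  # 8192 blocks (4 MiB)

raid0_old=$(( 2 * (lvol_old - reserved) ))  # usable blocks before resize
raid0_new=$(( 2 * (lvol_new - reserved) ))  # usable blocks after resize

(( lvol_old == 131072 && lvol_new == 204800 ))
(( raid0_old == 245760 ))   # matches "blockcnt 245760, blocklen 512"
(( raid0_new == 393216 ))   # matches "changed from 245760 to 393216"
```

This also explains the `(( 393216 == 393216 ))` assertion the test makes after resizing both lvols from 64 to 100 MiB.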
00:05:24.043 [2024-10-01 14:30:15.599405] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev cd044a1d-e56d-4f17-a338-f55423ab04cb 00:05:24.043 [2024-10-01 14:30:15.599463] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev cd044a1d-e56d-4f17-a338-f55423ab04cb is claimed 00:05:24.043 [2024-10-01 14:30:15.599563] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 44ea94ef-2725-4f53-a95c-e293b375bf35 00:05:24.043 [2024-10-01 14:30:15.599581] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 44ea94ef-2725-4f53-a95c-e293b375bf35 is claimed 00:05:24.043 [2024-10-01 14:30:15.599687] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 44ea94ef-2725-4f53-a95c-e293b375bf35 (2) smaller than existing raid bdev Raid (3) 00:05:24.043 [2024-10-01 14:30:15.599718] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev cd044a1d-e56d-4f17-a338-f55423ab04cb: File exists 00:05:24.043 [2024-10-01 14:30:15.599756] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:05:24.043 [2024-10-01 14:30:15.599767] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:05:24.043 [2024-10-01 14:30:15.599998] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:05:24.043 [2024-10-01 14:30:15.600140] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:05:24.043 [2024-10-01 14:30:15.600148] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:05:24.043 [2024-10-01 14:30:15.600321] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:05:24.043 pt0 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:05:24.043 [2024-10-01 14:30:15.616047] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 59010 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 59010 ']' 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 59010 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59010 00:05:24.043 killing process with pid 59010 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59010' 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 59010 00:05:24.043 [2024-10-01 14:30:15.665814] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:05:24.043 14:30:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 59010 00:05:24.043 [2024-10-01 14:30:15.665885] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:05:24.043 [2024-10-01 14:30:15.665932] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:05:24.043 [2024-10-01 14:30:15.665941] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:05:24.984 [2024-10-01 14:30:16.560665] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:05:25.921 ************************************ 00:05:25.922 END TEST raid0_resize_superblock_test 00:05:25.922 ************************************ 00:05:25.922 14:30:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:05:25.922 00:05:25.922 real 0m3.356s 00:05:25.922 user 0m3.513s 00:05:25.922 sys 0m0.418s 00:05:25.922 14:30:17 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:05:25.922 14:30:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:25.922 14:30:17 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:05:25.922 14:30:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:25.922 14:30:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:25.922 14:30:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:05:25.922 ************************************ 00:05:25.922 START TEST raid1_resize_superblock_test 00:05:25.922 ************************************ 00:05:25.922 14:30:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 1 00:05:25.922 14:30:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:05:25.922 14:30:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=59103 00:05:25.922 14:30:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 59103' 00:05:25.922 Process raid pid: 59103 00:05:25.922 14:30:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:05:25.922 14:30:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 59103 00:05:25.922 14:30:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 59103 ']' 00:05:25.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
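The `waitforlisten` call traced here (local `rpc_addr=/var/tmp/spdk.sock`, `max_retries=100`, the "Waiting for process to start up..." echo, and the `(( i == 0 ))` / `return 0` exit seen once the socket is up) can be sketched as a polling loop. The real helper probes the RPC socket; a plain existence test stands in for that below, so treat this as a simplified reconstruction:

```shell
#!/usr/bin/env bash
# Sketch of waitforlisten as traced above: poll until the target pid has
# created its UNIX domain socket, giving up after max_retries attempts.
# [ -e ] is a stand-in for the real helper's RPC probe.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i=0
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while (( i++ < max_retries )); do
        kill -0 "$pid" 2>/dev/null || return 1  # target died before listening
        [ -e "$rpc_addr" ] && return 0          # socket showed up
        sleep 0.1
    done
    return 1
}

sleep 5 &                 # stand-in for the bdev_svc process
pid=$!
sock=$(mktemp)            # stand-in for /var/tmp/spdk.sock
waitforlisten "$pid" "$sock"
rc=$?
echo "waitforlisten rc=$rc"
kill "$pid" 2>/dev/null
wait "$pid" 2>/dev/null || true
```

The retry bound mirrors `local max_retries=100` from the trace; the `kill -0` guard ensures the loop bails out promptly if the process crashes during startup rather than spinning for the full timeout.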
00:05:25.922 14:30:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.922 14:30:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:25.922 14:30:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.922 14:30:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:25.922 14:30:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:25.922 [2024-10-01 14:30:17.494726] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:05:25.922 [2024-10-01 14:30:17.495009] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:26.181 [2024-10-01 14:30:17.644162] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.181 [2024-10-01 14:30:17.843141] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.441 [2024-10-01 14:30:17.981466] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:26.441 [2024-10-01 14:30:17.981505] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:26.701 14:30:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:26.701 14:30:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:05:26.701 14:30:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:05:26.701 14:30:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:05:26.701 14:30:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:05:27.271 malloc0
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:05:27.271 [2024-10-01 14:30:18.717243] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:05:27.271 [2024-10-01 14:30:18.717423] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:05:27.271 [2024-10-01 14:30:18.717468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:05:27.271 [2024-10-01 14:30:18.717521] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:05:27.271 [2024-10-01 14:30:18.719722] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:05:27.271 [2024-10-01 14:30:18.719837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:05:27.271 pt0
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:05:27.271 fc6d11e7-a848-4c6e-8559-12e43396d0b5
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:05:27.271 efdbbe64-16e3-4fa8-b564-cab531c0f81c
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:05:27.271 aa392391-e712-4f8f-adde-ca95333820a6
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:05:27.271 [2024-10-01 14:30:18.805736] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev efdbbe64-16e3-4fa8-b564-cab531c0f81c is claimed
00:05:27.271 [2024-10-01 14:30:18.805814] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev aa392391-e712-4f8f-adde-ca95333820a6 is claimed
00:05:27.271 [2024-10-01 14:30:18.805943] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:05:27.271 [2024-10-01 14:30:18.805958] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512
00:05:27.271 [2024-10-01 14:30:18.806198] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:05:27.271 [2024-10-01 14:30:18.806360] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:05:27.271 [2024-10-01 14:30:18.806369] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:05:27.271 [2024-10-01 14:30:18.806510] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks'
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:05:27.271 [2024-10-01 14:30:18.885994] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 ))
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:05:27.271 [2024-10-01 14:30:18.917936] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:05:27.271 [2024-10-01 14:30:18.918040] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'efdbbe64-16e3-4fa8-b564-cab531c0f81c' was resized: old size 131072, new size 204800
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:05:27.271 [2024-10-01 14:30:18.925888] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:05:27.271 [2024-10-01 14:30:18.925978] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'aa392391-e712-4f8f-adde-ca95333820a6' was resized: old size 131072, new size 204800
00:05:27.271 [2024-10-01 14:30:18.926055] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks'
00:05:27.271 14:30:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:27.534 14:30:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 ))
00:05:27.534 14:30:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:05:27.534 14:30:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:27.534 14:30:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:05:27.534 14:30:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks'
00:05:27.534 14:30:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:27.534 14:30:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 ))
00:05:27.534 14:30:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:05:27.534 14:30:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid
00:05:27.534 14:30:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:05:27.534 14:30:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks'
00:05:27.534 14:30:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:27.534 14:30:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:05:27.534 [2024-10-01 14:30:19.006045] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:05:27.534 14:30:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:27.534 14:30:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:05:27.534 14:30:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:05:27.534 14:30:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 ))
00:05:27.534 14:30:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0
00:05:27.534 14:30:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:27.534 14:30:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:05:27.534 [2024-10-01 14:30:19.037785] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0
00:05:27.534 [2024-10-01 14:30:19.037854] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0
00:05:27.534 [2024-10-01 14:30:19.037885] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1
00:05:27.534 [2024-10-01 14:30:19.038035] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:05:27.534 [2024-10-01 14:30:19.038205] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:05:27.534 [2024-10-01 14:30:19.038272] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:05:27.534 [2024-10-01 14:30:19.038286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:05:27.534 14:30:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:27.534 14:30:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:05:27.534 14:30:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:27.534 14:30:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:05:27.534 [2024-10-01 14:30:19.045744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:05:27.534 [2024-10-01 14:30:19.045877] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:05:27.534 [2024-10-01 14:30:19.045913] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
00:05:27.534 [2024-10-01 14:30:19.045963] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:05:27.534 [2024-10-01 14:30:19.048119] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:05:27.534 [2024-10-01 14:30:19.048227] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:05:27.534 [2024-10-01 14:30:19.049842] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev efdbbe64-16e3-4fa8-b564-cab531c0f81c
00:05:27.534 [2024-10-01 14:30:19.049895] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev efdbbe64-16e3-4fa8-b564-cab531c0f81c is claimed
00:05:27.534 [2024-10-01 14:30:19.049994] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev aa392391-e712-4f8f-adde-ca95333820a6
00:05:27.534 [2024-10-01 14:30:19.050011] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev aa392391-e712-4f8f-adde-ca95333820a6 is claimed
00:05:27.534 [2024-10-01 14:30:19.050154] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev aa392391-e712-4f8f-adde-ca95333820a6 (2) smaller than existing raid bdev Raid (3)
00:05:27.534 [2024-10-01 14:30:19.050173] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev efdbbe64-16e3-4fa8-b564-cab531c0f81c: File exists
00:05:27.534 [2024-10-01 14:30:19.050211] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:05:27.534 [2024-10-01 14:30:19.050221] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:05:27.534 [2024-10-01 14:30:19.050457] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:05:27.534 [2024-10-01 14:30:19.050594] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
00:05:27.534 [2024-10-01 14:30:19.050602] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00
00:05:27.535 [2024-10-01 14:30:19.050756] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:05:27.535 pt0
00:05:27.535 14:30:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:27.535 14:30:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine
00:05:27.535 14:30:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:27.535 14:30:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:05:27.535 14:30:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:27.535 14:30:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:05:27.535 14:30:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid
00:05:27.535 14:30:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:27.535 14:30:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:05:27.535 14:30:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:05:27.535 14:30:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks'
[2024-10-01 14:30:19.066178] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:05:27.535 14:30:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:27.535 14:30:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:05:27.535 14:30:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:05:27.535 14:30:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 ))
00:05:27.535 14:30:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 59103
00:05:27.535 14:30:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 59103 ']'
00:05:27.535 14:30:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 59103
00:05:27.535 14:30:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # uname
00:05:27.535 14:30:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:27.535 14:30:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59103
killing process with pid 59103
00:05:27.535 14:30:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:27.535 14:30:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:27.535 14:30:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59103'
00:05:27.535 14:30:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 59103
00:05:27.535 [2024-10-01 14:30:19.120003] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:05:27.535 14:30:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 59103
00:05:27.535 [2024-10-01 14:30:19.120074] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:05:27.535 [2024-10-01 14:30:19.120126] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:05:27.535 [2024-10-01 14:30:19.120135] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline
00:05:28.481 [2024-10-01 14:30:20.019772] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:05:29.421 ************************************
00:05:29.421 END TEST raid1_resize_superblock_test
00:05:29.421 ************************************
00:05:29.421 14:30:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0
00:05:29.421
00:05:29.421 real 0m3.425s
00:05:29.421 user 0m3.613s
00:05:29.421 sys 0m0.421s
00:05:29.421 14:30:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:29.421 14:30:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:05:29.421 14:30:20 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s
00:05:29.421 14:30:20 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']'
00:05:29.421 14:30:20 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd
00:05:29.421 14:30:20 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true
00:05:29.421 14:30:20 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd
00:05:29.421 14:30:20 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0
00:05:29.421 14:30:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:05:29.421 14:30:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:29.421 14:30:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:05:29.421 ************************************
00:05:29.421 START TEST raid_function_test_raid0
00:05:29.421 ************************************
00:05:29.421 14:30:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # raid_function_test raid0
00:05:29.421 14:30:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0
00:05:29.421 14:30:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0
00:05:29.421 14:30:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev
Process raid pid: 59189
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:29.421 14:30:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=59189
00:05:29.421 14:30:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 59189'
00:05:29.421 14:30:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 59189
00:05:29.421 14:30:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:05:29.421 14:30:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 59189 ']'
00:05:29.421 14:30:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:29.421 14:30:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:29.421 14:30:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:29.421 14:30:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:29.421 14:30:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:05:29.421 [2024-10-01 14:30:20.991166] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization...
00:05:29.421 [2024-10-01 14:30:20.991435] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:05:29.681 [2024-10-01 14:30:21.139870] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:29.681 [2024-10-01 14:30:21.330588] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:05:29.940 [2024-10-01 14:30:21.467950] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:05:29.940 [2024-10-01 14:30:21.468082] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:05:30.514 14:30:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:30.514 14:30:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0
00:05:30.514 14:30:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1
00:05:30.514 14:30:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:30.514 14:30:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:05:30.514 Base_1
00:05:30.514 14:30:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:30.514 14:30:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2
00:05:30.514 14:30:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:30.514 14:30:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:05:30.514 Base_2
00:05:30.514 14:30:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:30.514 14:30:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid
00:05:30.514 14:30:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:30.514 14:30:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:05:30.514 [2024-10-01 14:30:21.959232] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:05:30.514 [2024-10-01 14:30:21.961047] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:05:30.514 [2024-10-01 14:30:21.961111] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:05:30.514 [2024-10-01 14:30:21.961123] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:05:30.514 [2024-10-01 14:30:21.961378] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:05:30.514 [2024-10-01 14:30:21.961506] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:05:30.514 [2024-10-01 14:30:21.961514] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780
00:05:30.514 [2024-10-01 14:30:21.961654] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:05:30.514 14:30:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:30.514 14:30:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online
00:05:30.514 14:30:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:30.514 14:30:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)'
00:05:30.514 14:30:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:05:30.514 14:30:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:30.514 14:30:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid
00:05:30.514 14:30:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']'
00:05:30.514 14:30:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0
00:05:30.514 14:30:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:05:30.514 14:30:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid')
00:05:30.514 14:30:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:30.514 14:30:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:05:30.514 14:30:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:30.514 14:30:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i
00:05:30.514 14:30:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:30.514 14:30:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:05:30.514 14:30:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0
00:05:30.514 [2024-10-01 14:30:22.183308] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:05:30.775 /dev/nbd0
00:05:30.775 14:30:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:30.775 14:30:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:30.775 14:30:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:05:30.775 14:30:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i
00:05:30.775 14:30:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i = 1 ))
14:30:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:05:30.775 14:30:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:05:30.775 14:30:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break
00:05:30.775 14:30:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:05:30.775 14:30:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:05:30.775 14:30:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:05:30.775 1+0 records in
00:05:30.775 1+0 records out
00:05:30.775 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029279 s, 14.0 MB/s
00:05:30.775 14:30:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:05:30.775 14:30:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # size=4096
00:05:30.775 14:30:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:05:30.775 14:30:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:05:30.775 14:30:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0
00:05:30.775 14:30:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:30.775 14:30:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:05:30.775 14:30:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock
00:05:30.775 14:30:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:05:30.775 14:30:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:05:31.035 14:30:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:31.035 {
00:05:31.035 "nbd_device": "/dev/nbd0",
00:05:31.035 "bdev_name": "raid"
00:05:31.035 }
00:05:31.035 ]'
00:05:31.035 14:30:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[
00:05:31.035 {
00:05:31.035 "nbd_device": "/dev/nbd0",
00:05:31.035 "bdev_name": "raid"
00:05:31.035 }
00:05:31.035 ]'
00:05:31.035 14:30:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:31.035 14:30:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
00:05:31.035 14:30:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
00:05:31.035 14:30:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:31.035 14:30:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1
00:05:31.035 14:30:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1
00:05:31.035 14:30:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1
00:05:31.035 14:30:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']'
00:05:31.035 14:30:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0
00:05:31.035 14:30:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard
00:05:31.035 14:30:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0
00:05:31.035 14:30:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize
00:05:31.035 14:30:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5
00:05:31.035 14:30:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC
00:05:31.035 14:30:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0
00:05:31.035 14:30:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512
00:05:31.035 14:30:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096
00:05:31.035 14:30:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152
00:05:31.035 14:30:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321')
00:05:31.035 14:30:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs
00:05:31.035 14:30:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456')
00:05:31.035 14:30:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums
00:05:31.035 14:30:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off
00:05:31.035 14:30:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len
00:05:31.035 14:30:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096
00:05:31.035 4096+0 records in
00:05:31.035 4096+0 records out
00:05:31.035 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0217864 s, 96.3 MB/s
00:05:31.035 14:30:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
00:05:31.603 4096+0 records in
00:05:31.603 4096+0 records out
00:05:31.603 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.709694 s, 3.0 MB/s
00:05:31.863 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0
00:05:31.863 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:05:31.863 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 ))
00:05:31.863 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:05:31.863 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0
00:05:31.863 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536
00:05:31.863 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc
00:05:31.863 128+0 records in
00:05:31.863 128+0 records out
00:05:31.863 65536 bytes (66 kB, 64 KiB) copied, 0.000688473 s, 95.2 MB/s
00:05:31.863 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0
00:05:31.863 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:05:31.863 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:05:31.863 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:05:31.863 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:05:31.863 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336
00:05:31.863 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920
00:05:31.863 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc
00:05:31.863 2035+0 records in
00:05:31.863 2035+0 records out
00:05:31.863 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00744184 s, 140 MB/s
00:05:31.863 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0
00:05:31.863 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:05:31.863 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:05:31.863 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:05:31.863 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:05:31.863 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352
00:05:31.863 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472
00:05:31.863 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc
00:05:31.864 456+0 records in
00:05:31.864 456+0 records out
00:05:31.864 233472 bytes (233 kB, 228 KiB) copied, 0.00350933 s, 66.5 MB/s
00:05:31.864 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0
00:05:31.864 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:05:31.864 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:05:31.864 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:05:31.864 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:05:31.864 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0
00:05:31.864 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:05:31.864 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:05:31.864 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:05:31.864 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:31.864 14:30:23
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:05:31.864 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.864 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:05:32.125 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:32.125 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:32.125 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:32.125 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:32.125 [2024-10-01 14:30:23.586448] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:05:32.125 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:32.125 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:32.125 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:05:32.125 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:05:32.125 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:05:32.125 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:05:32.125 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:05:32.125 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:32.125 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:32.125 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:32.387 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:32.387 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:32.387 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:32.387 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:05:32.387 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:05:32.387 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:32.387 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:05:32.387 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:05:32.387 14:30:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 59189 00:05:32.387 14:30:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # '[' -z 59189 ']' 00:05:32.387 14:30:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # kill -0 59189 00:05:32.387 14:30:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # uname 00:05:32.387 14:30:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:32.387 14:30:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59189 00:05:32.387 killing process with pid 59189 00:05:32.387 14:30:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:32.387 14:30:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:32.387 14:30:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59189' 00:05:32.387 14:30:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 59189 
00:05:32.387 [2024-10-01 14:30:23.864755] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:05:32.387 14:30:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 59189 00:05:32.387 [2024-10-01 14:30:23.864847] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:05:32.387 [2024-10-01 14:30:23.864892] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:05:32.387 [2024-10-01 14:30:23.864903] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:05:32.387 [2024-10-01 14:30:23.993084] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:05:33.328 14:30:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:05:33.328 00:05:33.328 real 0m3.890s 00:05:33.328 user 0m4.459s 00:05:33.328 sys 0m0.916s 00:05:33.328 14:30:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.328 ************************************ 00:05:33.328 END TEST raid_function_test_raid0 00:05:33.328 ************************************ 00:05:33.328 14:30:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:05:33.328 14:30:24 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:05:33.329 14:30:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:33.329 14:30:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.329 14:30:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:05:33.329 ************************************ 00:05:33.329 START TEST raid_function_test_concat 00:05:33.329 ************************************ 00:05:33.329 14:30:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat 00:05:33.329 14:30:24 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:05:33.329 14:30:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:05:33.329 14:30:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:05:33.329 Process raid pid: 59318 00:05:33.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.329 14:30:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=59318 00:05:33.329 14:30:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 59318' 00:05:33.329 14:30:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 59318 00:05:33.329 14:30:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:05:33.329 14:30:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 59318 ']' 00:05:33.329 14:30:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.329 14:30:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:33.329 14:30:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.329 14:30:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:33.329 14:30:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:05:33.329 [2024-10-01 14:30:24.956226] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:05:33.329 [2024-10-01 14:30:24.956352] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:33.589 [2024-10-01 14:30:25.107904] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.859 [2024-10-01 14:30:25.304553] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.859 [2024-10-01 14:30:25.444190] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:33.859 [2024-10-01 14:30:25.444223] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:34.118 14:30:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:34.118 14:30:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0 00:05:34.118 14:30:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:05:34.118 14:30:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.118 14:30:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:05:34.378 Base_1 00:05:34.378 14:30:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.378 14:30:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:05:34.378 14:30:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.378 14:30:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:05:34.378 Base_2 00:05:34.378 14:30:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.378 14:30:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:05:34.378 14:30:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.378 14:30:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:05:34.378 [2024-10-01 14:30:25.877519] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:05:34.378 [2024-10-01 14:30:25.879474] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:05:34.378 [2024-10-01 14:30:25.879546] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:05:34.378 [2024-10-01 14:30:25.879559] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:05:34.378 [2024-10-01 14:30:25.879852] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:05:34.378 [2024-10-01 14:30:25.879986] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:05:34.378 [2024-10-01 14:30:25.879995] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:05:34.378 [2024-10-01 14:30:25.880143] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:05:34.378 14:30:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.378 14:30:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:05:34.378 14:30:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.378 14:30:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:05:34.378 14:30:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:05:34.378 14:30:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.378 14:30:25 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:05:34.378 14:30:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:05:34.378 14:30:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:05:34.378 14:30:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:05:34.378 14:30:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:05:34.378 14:30:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:34.378 14:30:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:05:34.378 14:30:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:34.378 14:30:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:05:34.378 14:30:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:34.378 14:30:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:05:34.378 14:30:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:05:34.638 [2024-10-01 14:30:26.097606] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:05:34.638 /dev/nbd0 00:05:34.638 14:30:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:34.638 14:30:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:34.638 14:30:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:34.638 14:30:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i 00:05:34.638 14:30:26 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:34.638 14:30:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:34.638 14:30:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:34.638 14:30:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break 00:05:34.638 14:30:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:34.638 14:30:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:34.638 14:30:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:34.638 1+0 records in 00:05:34.638 1+0 records out 00:05:34.638 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00043101 s, 9.5 MB/s 00:05:34.638 14:30:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:34.638 14:30:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096 00:05:34.638 14:30:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:34.638 14:30:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:34.638 14:30:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0 00:05:34.638 14:30:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:34.638 14:30:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:05:34.638 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:05:34.638 14:30:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 
00:05:34.638 14:30:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:05:34.898 14:30:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:34.898 { 00:05:34.898 "nbd_device": "/dev/nbd0", 00:05:34.898 "bdev_name": "raid" 00:05:34.898 } 00:05:34.898 ]' 00:05:34.898 14:30:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:34.898 14:30:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:34.898 { 00:05:34.898 "nbd_device": "/dev/nbd0", 00:05:34.898 "bdev_name": "raid" 00:05:34.898 } 00:05:34.898 ]' 00:05:34.898 14:30:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:05:34.898 14:30:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:05:34.898 14:30:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:34.898 14:30:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:05:34.898 14:30:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:05:34.898 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:05:34.898 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:05:34.898 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:05:34.898 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:05:34.898 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:05:34.898 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:05:34.898 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:05:34.898 14:30:26 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:05:34.898 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:05:34.898 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:05:34.898 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:05:34.898 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:05:34.898 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:05:34.898 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:05:34.898 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:05:34.898 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:05:34.898 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:05:34.898 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:05:34.898 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:05:34.898 4096+0 records in 00:05:34.898 4096+0 records out 00:05:34.898 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0205347 s, 102 MB/s 00:05:34.898 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:05:35.469 4096+0 records in 00:05:35.469 4096+0 records out 00:05:35.469 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.372727 s, 5.6 MB/s 00:05:35.469 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:05:35.469 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest 
/dev/nbd0 00:05:35.469 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:05:35.469 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:05:35.469 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:05:35.469 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:05:35.469 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:05:35.469 128+0 records in 00:05:35.469 128+0 records out 00:05:35.469 65536 bytes (66 kB, 64 KiB) copied, 0.000835369 s, 78.5 MB/s 00:05:35.469 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:05:35.469 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:05:35.469 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:05:35.469 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:05:35.469 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:05:35.469 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:05:35.469 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:05:35.469 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:05:35.469 2035+0 records in 00:05:35.469 2035+0 records out 00:05:35.469 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00585801 s, 178 MB/s 00:05:35.469 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:05:35.469 14:30:26 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:05:35.469 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:05:35.469 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:05:35.470 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:05:35.470 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:05:35.470 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:05:35.470 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:05:35.470 456+0 records in 00:05:35.470 456+0 records out 00:05:35.470 233472 bytes (233 kB, 228 KiB) copied, 0.0020307 s, 115 MB/s 00:05:35.470 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:05:35.470 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:05:35.470 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:05:35.470 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:05:35.470 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:05:35.470 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:05:35.470 14:30:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:05:35.470 14:30:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:05:35.470 14:30:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:05:35.470 14:30:26 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:35.470 14:30:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:05:35.470 14:30:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:35.470 14:30:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:05:35.731 14:30:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:35.731 [2024-10-01 14:30:27.191069] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:05:35.731 14:30:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:35.731 14:30:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:35.731 14:30:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:35.731 14:30:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:35.731 14:30:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:35.731 14:30:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:05:35.731 14:30:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:05:35.731 14:30:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:05:35.731 14:30:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:05:35.731 14:30:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:05:35.991 14:30:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:35.991 14:30:27 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:35.991 14:30:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:35.991 14:30:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:35.991 14:30:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:35.991 14:30:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:35.991 14:30:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:05:35.991 14:30:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:05:35.991 14:30:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:35.991 14:30:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:05:35.991 14:30:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:05:35.991 14:30:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 59318 00:05:35.992 14:30:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 59318 ']' 00:05:35.992 14:30:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # kill -0 59318 00:05:35.992 14:30:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname 00:05:35.992 14:30:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:35.992 14:30:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59318 00:05:35.992 killing process with pid 59318 00:05:35.992 14:30:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:35.992 14:30:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:35.992 14:30:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 59318' 00:05:35.992 14:30:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 59318 00:05:35.992 [2024-10-01 14:30:27.487207] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:05:35.992 14:30:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 59318 00:05:35.992 [2024-10-01 14:30:27.487329] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:05:35.992 [2024-10-01 14:30:27.487386] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:05:35.992 [2024-10-01 14:30:27.487400] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:05:35.992 [2024-10-01 14:30:27.630292] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:05:36.931 ************************************ 00:05:36.931 END TEST raid_function_test_concat 00:05:36.931 ************************************ 00:05:36.931 14:30:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:05:36.931 00:05:36.931 real 0m3.580s 00:05:36.931 user 0m4.226s 00:05:36.931 sys 0m0.807s 00:05:36.931 14:30:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:36.931 14:30:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:05:36.931 14:30:28 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:05:36.931 14:30:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:36.931 14:30:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.931 14:30:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:05:36.931 ************************************ 00:05:36.931 START TEST raid0_resize_test 00:05:36.931 ************************************ 00:05:36.931 14:30:28 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@1125 -- # raid_resize_test 0 00:05:36.931 14:30:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:05:36.931 14:30:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:05:36.931 14:30:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:05:36.931 14:30:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:05:36.931 14:30:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:05:36.931 14:30:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:05:36.931 14:30:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:05:36.931 14:30:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:05:36.931 Process raid pid: 59440 00:05:36.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.931 14:30:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=59440 00:05:36.931 14:30:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 59440' 00:05:36.931 14:30:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 59440 00:05:36.932 14:30:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # '[' -z 59440 ']' 00:05:36.932 14:30:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.932 14:30:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:36.932 14:30:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:05:36.932 14:30:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:36.932 14:30:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:36.932 14:30:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:05:36.932 [2024-10-01 14:30:28.598034] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:05:36.932 [2024-10-01 14:30:28.598338] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:37.192 [2024-10-01 14:30:28.761996] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.455 [2024-10-01 14:30:28.986334] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.717 [2024-10-01 14:30:29.140112] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:37.717 [2024-10-01 14:30:29.140158] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:37.979 14:30:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:37.979 14:30:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0 00:05:37.979 14:30:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:05:37.979 14:30:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.979 14:30:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:05:37.979 Base_1 00:05:37.979 14:30:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.979 14:30:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:05:37.979 14:30:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.979 14:30:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set 
+x 00:05:37.979 Base_2 00:05:37.979 14:30:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.979 14:30:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:05:37.979 14:30:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:05:37.979 14:30:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.979 14:30:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:05:37.980 [2024-10-01 14:30:29.486794] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:05:37.980 [2024-10-01 14:30:29.488880] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:05:37.980 [2024-10-01 14:30:29.489127] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:05:37.980 [2024-10-01 14:30:29.489155] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:05:37.980 [2024-10-01 14:30:29.489461] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:05:37.980 [2024-10-01 14:30:29.489590] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:05:37.980 [2024-10-01 14:30:29.489602] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:05:37.980 [2024-10-01 14:30:29.489814] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:05:37.980 14:30:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.980 14:30:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:05:37.980 14:30:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.980 14:30:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 
00:05:37.980 [2024-10-01 14:30:29.494747] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:05:37.980 [2024-10-01 14:30:29.494779] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:05:37.980 true 00:05:37.980 14:30:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.980 14:30:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:05:37.980 14:30:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.980 14:30:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:05:37.980 14:30:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:05:37.980 [2024-10-01 14:30:29.506928] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:05:37.980 14:30:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.980 14:30:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:05:37.980 14:30:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:05:37.980 14:30:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:05:37.980 14:30:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:05:37.980 14:30:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:05:37.980 14:30:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:05:37.980 14:30:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.980 14:30:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:05:37.980 [2024-10-01 14:30:29.538749] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:05:37.980 [2024-10-01 14:30:29.538776] 
bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:05:37.980 [2024-10-01 14:30:29.538815] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:05:37.980 true 00:05:37.980 14:30:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.980 14:30:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:05:37.980 14:30:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.980 14:30:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:05:37.980 14:30:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:05:37.980 [2024-10-01 14:30:29.550932] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:05:37.980 14:30:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.980 14:30:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:05:37.980 14:30:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:05:37.980 14:30:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:05:37.980 14:30:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:05:37.980 14:30:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:05:37.980 14:30:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 59440 00:05:37.980 14:30:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # '[' -z 59440 ']' 00:05:37.980 14:30:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 59440 00:05:37.980 14:30:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # uname 00:05:37.980 14:30:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- 
# '[' Linux = Linux ']' 00:05:37.980 14:30:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59440 00:05:37.980 killing process with pid 59440 00:05:37.980 14:30:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:37.980 14:30:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:37.980 14:30:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59440' 00:05:37.980 14:30:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 59440 00:05:37.980 [2024-10-01 14:30:29.605200] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:05:37.980 14:30:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 59440 00:05:37.980 [2024-10-01 14:30:29.605297] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:05:37.980 [2024-10-01 14:30:29.605353] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:05:37.980 [2024-10-01 14:30:29.605363] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:05:37.980 [2024-10-01 14:30:29.617739] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:05:38.925 ************************************ 00:05:38.925 END TEST raid0_resize_test 00:05:38.925 ************************************ 00:05:38.925 14:30:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:05:38.925 00:05:38.925 real 0m2.013s 00:05:38.925 user 0m2.102s 00:05:38.925 sys 0m0.326s 00:05:38.925 14:30:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.925 14:30:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:05:38.925 14:30:30 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:05:38.925 
14:30:30 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:38.925 14:30:30 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.925 14:30:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:05:39.186 ************************************ 00:05:39.186 START TEST raid1_resize_test 00:05:39.186 ************************************ 00:05:39.186 14:30:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 1 00:05:39.186 14:30:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:05:39.186 14:30:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:05:39.186 Process raid pid: 59496 00:05:39.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.186 14:30:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:05:39.186 14:30:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:05:39.186 14:30:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:05:39.186 14:30:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:05:39.186 14:30:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:05:39.186 14:30:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:05:39.186 14:30:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=59496 00:05:39.186 14:30:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 59496' 00:05:39.186 14:30:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 59496 00:05:39.186 14:30:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@831 -- # '[' -z 59496 ']' 00:05:39.186 14:30:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.186 14:30:30 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:39.186 14:30:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:05:39.186 14:30:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.186 14:30:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:39.186 14:30:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:05:39.186 [2024-10-01 14:30:30.695532] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:05:39.186 [2024-10-01 14:30:30.696136] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:39.186 [2024-10-01 14:30:30.850805] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.447 [2024-10-01 14:30:31.116211] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.706 [2024-10-01 14:30:31.282985] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:39.706 [2024-10-01 14:30:31.283310] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:39.967 14:30:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:39.967 14:30:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # return 0 00:05:39.967 14:30:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:05:39.967 14:30:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.967 14:30:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 
00:05:39.967 Base_1 00:05:39.967 14:30:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.967 14:30:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:05:39.967 14:30:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.967 14:30:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:05:39.967 Base_2 00:05:39.967 14:30:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.967 14:30:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:05:39.967 14:30:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:05:39.967 14:30:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.967 14:30:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:05:39.967 [2024-10-01 14:30:31.604396] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:05:39.967 [2024-10-01 14:30:31.606528] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:05:39.967 [2024-10-01 14:30:31.606608] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:05:39.967 [2024-10-01 14:30:31.606621] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:05:39.967 [2024-10-01 14:30:31.606971] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:05:39.967 [2024-10-01 14:30:31.607113] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:05:39.967 [2024-10-01 14:30:31.607123] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:05:39.967 [2024-10-01 14:30:31.607288] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:05:39.967 14:30:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.967 14:30:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:05:39.967 14:30:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.967 14:30:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:05:39.967 [2024-10-01 14:30:31.612333] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:05:39.967 [2024-10-01 14:30:31.612367] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:05:39.967 true 00:05:39.967 14:30:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.967 14:30:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:05:39.968 14:30:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.968 14:30:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:05:39.968 14:30:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:05:39.968 [2024-10-01 14:30:31.624526] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:05:39.968 14:30:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.229 14:30:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:05:40.229 14:30:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:05:40.229 14:30:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:05:40.229 14:30:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:05:40.229 14:30:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:05:40.229 14:30:31 bdev_raid.raid1_resize_test 
-- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:05:40.229 14:30:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.229 14:30:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:05:40.229 [2024-10-01 14:30:31.656386] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:05:40.229 [2024-10-01 14:30:31.656424] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:05:40.229 [2024-10-01 14:30:31.656468] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:05:40.229 true 00:05:40.229 14:30:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.229 14:30:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:05:40.229 14:30:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.229 14:30:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:05:40.229 14:30:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:05:40.229 [2024-10-01 14:30:31.668567] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:05:40.229 14:30:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.229 14:30:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:05:40.229 14:30:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:05:40.229 14:30:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:05:40.229 14:30:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:05:40.229 14:30:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:05:40.229 14:30:31 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@387 -- # killprocess 59496 00:05:40.229 14:30:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # '[' -z 59496 ']' 00:05:40.229 14:30:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # kill -0 59496 00:05:40.229 14:30:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # uname 00:05:40.229 14:30:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:40.229 14:30:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59496 00:05:40.229 killing process with pid 59496 00:05:40.229 14:30:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:40.229 14:30:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:40.229 14:30:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59496' 00:05:40.229 14:30:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@969 -- # kill 59496 00:05:40.229 14:30:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@974 -- # wait 59496 00:05:40.229 [2024-10-01 14:30:31.727580] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:05:40.229 [2024-10-01 14:30:31.727695] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:05:40.229 [2024-10-01 14:30:31.728271] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:05:40.229 [2024-10-01 14:30:31.728294] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:05:40.229 [2024-10-01 14:30:31.740303] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:05:41.169 ************************************ 00:05:41.169 END TEST raid1_resize_test 00:05:41.169 ************************************ 00:05:41.169 14:30:32 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@389 -- # return 0 00:05:41.169 00:05:41.169 real 0m2.050s 00:05:41.169 user 0m2.140s 00:05:41.169 sys 0m0.354s 00:05:41.169 14:30:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:41.169 14:30:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:05:41.169 14:30:32 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:05:41.169 14:30:32 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:05:41.169 14:30:32 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:05:41.169 14:30:32 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:41.169 14:30:32 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.169 14:30:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:05:41.169 ************************************ 00:05:41.169 START TEST raid_state_function_test 00:05:41.169 ************************************ 00:05:41.169 14:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false 00:05:41.169 14:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:05:41.169 14:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:05:41.169 14:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:05:41.169 14:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:05:41.169 14:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:05:41.169 14:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:05:41.169 14:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:05:41.169 14:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:05:41.169 14:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:05:41.169 14:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:05:41.169 14:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:05:41.169 14:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:05:41.169 14:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:05:41.169 14:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:05:41.169 14:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:05:41.169 14:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:05:41.169 14:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:05:41.169 14:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:05:41.169 Process raid pid: 59555 00:05:41.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:41.169 14:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:05:41.169 14:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:05:41.169 14:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:05:41.169 14:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:05:41.169 14:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:05:41.169 14:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=59555 00:05:41.169 14:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 59555' 00:05:41.169 14:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 59555 00:05:41.169 14:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 59555 ']' 00:05:41.169 14:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.169 14:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:41.169 14:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.169 14:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:41.169 14:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:41.169 14:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:05:41.169 [2024-10-01 14:30:32.834377] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:05:41.169 [2024-10-01 14:30:32.835073] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:41.430 [2024-10-01 14:30:32.991225] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.690 [2024-10-01 14:30:33.261310] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.951 [2024-10-01 14:30:33.437125] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:41.951 [2024-10-01 14:30:33.437528] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:42.211 14:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:42.211 14:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:05:42.211 14:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:05:42.211 14:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.211 14:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:42.211 [2024-10-01 14:30:33.715612] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:05:42.211 [2024-10-01 14:30:33.715889] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:05:42.211 [2024-10-01 14:30:33.715923] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:05:42.211 [2024-10-01 14:30:33.715943] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:05:42.211 14:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.211 14:30:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:05:42.211 14:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:05:42.211 14:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:05:42.211 14:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:05:42.211 14:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:05:42.211 14:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:05:42.211 14:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:05:42.211 14:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:05:42.211 14:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:05:42.211 14:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:05:42.211 14:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:42.211 14:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.212 14:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:05:42.212 14:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:42.212 14:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.212 14:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:05:42.212 "name": "Existed_Raid", 00:05:42.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:05:42.212 "strip_size_kb": 64, 00:05:42.212 "state": "configuring", 00:05:42.212 
"raid_level": "raid0", 00:05:42.212 "superblock": false, 00:05:42.212 "num_base_bdevs": 2, 00:05:42.212 "num_base_bdevs_discovered": 0, 00:05:42.212 "num_base_bdevs_operational": 2, 00:05:42.212 "base_bdevs_list": [ 00:05:42.212 { 00:05:42.212 "name": "BaseBdev1", 00:05:42.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:05:42.212 "is_configured": false, 00:05:42.212 "data_offset": 0, 00:05:42.212 "data_size": 0 00:05:42.212 }, 00:05:42.212 { 00:05:42.212 "name": "BaseBdev2", 00:05:42.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:05:42.212 "is_configured": false, 00:05:42.212 "data_offset": 0, 00:05:42.212 "data_size": 0 00:05:42.212 } 00:05:42.212 ] 00:05:42.212 }' 00:05:42.212 14:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:05:42.212 14:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:42.474 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:05:42.474 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.474 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:42.474 [2024-10-01 14:30:34.051591] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:05:42.474 [2024-10-01 14:30:34.051818] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:05:42.474 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.474 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:05:42.474 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.474 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
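The shell helper `verify_raid_bdev_state` exercised above filters the `rpc_cmd bdev_raid_get_bdevs all` output with `jq -r '.[] | select(.name == "Existed_Raid")'` and compares the resulting fields against the expected state. As a rough illustration only, the same checks can be sketched in Python against the JSON record the log prints (the sample record below is copied from the output above; the `verify_state` helper is a hypothetical reimplementation, not part of the SPDK test suite):

```python
import json

# Sample record printed by `rpc_cmd bdev_raid_get_bdevs all` in the log above:
# a freshly created raid0 volume, still configuring, no base bdevs discovered.
RAID_BDEV_INFO = """
{
  "name": "Existed_Raid",
  "uuid": "00000000-0000-0000-0000-000000000000",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "raid0",
  "superblock": false,
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 0,
  "num_base_bdevs_operational": 2,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "uuid": "00000000-0000-0000-0000-000000000000",
     "is_configured": false, "data_offset": 0, "data_size": 0},
    {"name": "BaseBdev2", "uuid": "00000000-0000-0000-0000-000000000000",
     "is_configured": false, "data_offset": 0, "data_size": 0}
  ]
}
"""

def verify_state(info: dict, expected_state: str, raid_level: str,
                 strip_size: int, num_operational: int) -> bool:
    """Mirror the field comparisons verify_raid_bdev_state does with jq."""
    # Count base bdevs that are actually configured, and cross-check it
    # against the counter the raid module reports.
    discovered = sum(b["is_configured"] for b in info["base_bdevs_list"])
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == num_operational
            and discovered == info["num_base_bdevs_discovered"])

info = json.loads(RAID_BDEV_INFO)
# Corresponds to `verify_raid_bdev_state Existed_Raid configuring raid0 64 2`.
print(verify_state(info, "configuring", "raid0", 64, 2))  # True
```

This is only a model of the check: in the test itself the record comes live from the SPDK RPC socket, and the volume transitions from `configuring` to `online` once both malloc base bdevs are created and claimed, as the later log entries show.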
00:05:42.474 [2024-10-01 14:30:34.063625] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:05:42.474 [2024-10-01 14:30:34.063690] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:05:42.474 [2024-10-01 14:30:34.063701] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:05:42.474 [2024-10-01 14:30:34.063727] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:05:42.474 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.474 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:05:42.474 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.474 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:42.474 [2024-10-01 14:30:34.118313] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:05:42.474 BaseBdev1 00:05:42.474 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.474 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:05:42.474 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:05:42.474 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:05:42.474 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:05:42.474 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:05:42.474 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:05:42.474 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:05:42.474 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.474 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:42.474 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.474 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:05:42.474 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.474 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:42.474 [ 00:05:42.474 { 00:05:42.474 "name": "BaseBdev1", 00:05:42.474 "aliases": [ 00:05:42.474 "60f797e5-d225-475d-8491-8acae6feb4e5" 00:05:42.474 ], 00:05:42.474 "product_name": "Malloc disk", 00:05:42.474 "block_size": 512, 00:05:42.474 "num_blocks": 65536, 00:05:42.474 "uuid": "60f797e5-d225-475d-8491-8acae6feb4e5", 00:05:42.474 "assigned_rate_limits": { 00:05:42.474 "rw_ios_per_sec": 0, 00:05:42.474 "rw_mbytes_per_sec": 0, 00:05:42.474 "r_mbytes_per_sec": 0, 00:05:42.474 "w_mbytes_per_sec": 0 00:05:42.474 }, 00:05:42.474 "claimed": true, 00:05:42.474 "claim_type": "exclusive_write", 00:05:42.474 "zoned": false, 00:05:42.474 "supported_io_types": { 00:05:42.474 "read": true, 00:05:42.474 "write": true, 00:05:42.474 "unmap": true, 00:05:42.474 "flush": true, 00:05:42.474 "reset": true, 00:05:42.474 "nvme_admin": false, 00:05:42.474 "nvme_io": false, 00:05:42.474 "nvme_io_md": false, 00:05:42.474 "write_zeroes": true, 00:05:42.474 "zcopy": true, 00:05:42.474 "get_zone_info": false, 00:05:42.474 "zone_management": false, 00:05:42.474 "zone_append": false, 00:05:42.474 "compare": false, 00:05:42.474 "compare_and_write": false, 00:05:42.474 "abort": true, 00:05:42.475 "seek_hole": false, 00:05:42.475 "seek_data": false, 00:05:42.475 "copy": true, 00:05:42.475 "nvme_iov_md": 
false 00:05:42.475 }, 00:05:42.475 "memory_domains": [ 00:05:42.475 { 00:05:42.475 "dma_device_id": "system", 00:05:42.475 "dma_device_type": 1 00:05:42.475 }, 00:05:42.475 { 00:05:42.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.475 "dma_device_type": 2 00:05:42.475 } 00:05:42.475 ], 00:05:42.475 "driver_specific": {} 00:05:42.475 } 00:05:42.475 ] 00:05:42.475 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.475 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:05:42.475 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:05:42.475 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:05:42.475 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:05:42.475 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:05:42.475 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:05:42.475 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:05:42.475 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:05:42.475 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:05:42.475 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:05:42.475 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:05:42.475 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:42.475 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:05:42.475 
14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.475 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:42.741 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.741 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:05:42.741 "name": "Existed_Raid", 00:05:42.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:05:42.741 "strip_size_kb": 64, 00:05:42.741 "state": "configuring", 00:05:42.741 "raid_level": "raid0", 00:05:42.741 "superblock": false, 00:05:42.741 "num_base_bdevs": 2, 00:05:42.741 "num_base_bdevs_discovered": 1, 00:05:42.741 "num_base_bdevs_operational": 2, 00:05:42.741 "base_bdevs_list": [ 00:05:42.741 { 00:05:42.741 "name": "BaseBdev1", 00:05:42.741 "uuid": "60f797e5-d225-475d-8491-8acae6feb4e5", 00:05:42.741 "is_configured": true, 00:05:42.741 "data_offset": 0, 00:05:42.741 "data_size": 65536 00:05:42.741 }, 00:05:42.741 { 00:05:42.741 "name": "BaseBdev2", 00:05:42.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:05:42.741 "is_configured": false, 00:05:42.741 "data_offset": 0, 00:05:42.741 "data_size": 0 00:05:42.741 } 00:05:42.741 ] 00:05:42.741 }' 00:05:42.741 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:05:42.741 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:43.003 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:05:43.003 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:43.003 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:43.003 [2024-10-01 14:30:34.482475] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:05:43.003 [2024-10-01 14:30:34.482547] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:05:43.003 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:43.003 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:05:43.003 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:43.003 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:43.003 [2024-10-01 14:30:34.494517] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:05:43.003 [2024-10-01 14:30:34.496761] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:05:43.003 [2024-10-01 14:30:34.496866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:05:43.003 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:43.003 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:05:43.003 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:05:43.003 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:05:43.003 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:05:43.003 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:05:43.003 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:05:43.003 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:05:43.003 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:05:43.003 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:05:43.003 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:05:43.003 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:05:43.003 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:05:43.003 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:43.003 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:43.003 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:05:43.003 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:43.003 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:43.003 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:05:43.003 "name": "Existed_Raid", 00:05:43.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:05:43.003 "strip_size_kb": 64, 00:05:43.003 "state": "configuring", 00:05:43.003 "raid_level": "raid0", 00:05:43.003 "superblock": false, 00:05:43.003 "num_base_bdevs": 2, 00:05:43.003 "num_base_bdevs_discovered": 1, 00:05:43.004 "num_base_bdevs_operational": 2, 00:05:43.004 "base_bdevs_list": [ 00:05:43.004 { 00:05:43.004 "name": "BaseBdev1", 00:05:43.004 "uuid": "60f797e5-d225-475d-8491-8acae6feb4e5", 00:05:43.004 "is_configured": true, 00:05:43.004 "data_offset": 0, 00:05:43.004 "data_size": 65536 00:05:43.004 }, 00:05:43.004 { 00:05:43.004 "name": "BaseBdev2", 00:05:43.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:05:43.004 "is_configured": false, 00:05:43.004 "data_offset": 0, 00:05:43.004 "data_size": 0 00:05:43.004 } 00:05:43.004 
] 00:05:43.004 }' 00:05:43.004 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:05:43.004 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:43.265 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:05:43.265 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:43.265 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:43.265 [2024-10-01 14:30:34.914476] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:05:43.265 [2024-10-01 14:30:34.914545] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:05:43.265 [2024-10-01 14:30:34.914556] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:05:43.265 [2024-10-01 14:30:34.914982] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:05:43.265 [2024-10-01 14:30:34.915156] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:05:43.265 [2024-10-01 14:30:34.915172] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:05:43.265 [2024-10-01 14:30:34.915464] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:05:43.265 BaseBdev2 00:05:43.265 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:43.265 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:05:43.265 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:05:43.265 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:05:43.265 14:30:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:05:43.265 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:05:43.265 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:05:43.265 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:05:43.265 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:43.265 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:43.265 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:43.265 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:05:43.265 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:43.265 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:43.265 [ 00:05:43.265 { 00:05:43.265 "name": "BaseBdev2", 00:05:43.265 "aliases": [ 00:05:43.265 "1af575c6-a96b-4778-9b8a-ea73925ee93b" 00:05:43.265 ], 00:05:43.265 "product_name": "Malloc disk", 00:05:43.265 "block_size": 512, 00:05:43.265 "num_blocks": 65536, 00:05:43.265 "uuid": "1af575c6-a96b-4778-9b8a-ea73925ee93b", 00:05:43.265 "assigned_rate_limits": { 00:05:43.265 "rw_ios_per_sec": 0, 00:05:43.265 "rw_mbytes_per_sec": 0, 00:05:43.265 "r_mbytes_per_sec": 0, 00:05:43.265 "w_mbytes_per_sec": 0 00:05:43.265 }, 00:05:43.265 "claimed": true, 00:05:43.265 "claim_type": "exclusive_write", 00:05:43.265 "zoned": false, 00:05:43.265 "supported_io_types": { 00:05:43.265 "read": true, 00:05:43.265 "write": true, 00:05:43.265 "unmap": true, 00:05:43.265 "flush": true, 00:05:43.265 "reset": true, 00:05:43.265 "nvme_admin": false, 00:05:43.265 "nvme_io": false, 00:05:43.265 "nvme_io_md": 
false, 00:05:43.265 "write_zeroes": true, 00:05:43.265 "zcopy": true, 00:05:43.265 "get_zone_info": false, 00:05:43.265 "zone_management": false, 00:05:43.265 "zone_append": false, 00:05:43.265 "compare": false, 00:05:43.265 "compare_and_write": false, 00:05:43.265 "abort": true, 00:05:43.265 "seek_hole": false, 00:05:43.265 "seek_data": false, 00:05:43.265 "copy": true, 00:05:43.265 "nvme_iov_md": false 00:05:43.265 }, 00:05:43.265 "memory_domains": [ 00:05:43.265 { 00:05:43.265 "dma_device_id": "system", 00:05:43.265 "dma_device_type": 1 00:05:43.265 }, 00:05:43.265 { 00:05:43.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:43.265 "dma_device_type": 2 00:05:43.265 } 00:05:43.265 ], 00:05:43.265 "driver_specific": {} 00:05:43.265 } 00:05:43.265 ] 00:05:43.265 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:43.265 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:05:43.265 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:05:43.265 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:05:43.525 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:05:43.525 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:05:43.525 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:05:43.525 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:05:43.525 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:05:43.525 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:05:43.525 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:05:43.526 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:05:43.526 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:05:43.526 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:05:43.526 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:43.526 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:43.526 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:05:43.526 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:43.526 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:43.526 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:05:43.526 "name": "Existed_Raid", 00:05:43.526 "uuid": "6be53bfc-35d7-4dcf-87ff-26567aa6c634", 00:05:43.526 "strip_size_kb": 64, 00:05:43.526 "state": "online", 00:05:43.526 "raid_level": "raid0", 00:05:43.526 "superblock": false, 00:05:43.526 "num_base_bdevs": 2, 00:05:43.526 "num_base_bdevs_discovered": 2, 00:05:43.526 "num_base_bdevs_operational": 2, 00:05:43.526 "base_bdevs_list": [ 00:05:43.526 { 00:05:43.526 "name": "BaseBdev1", 00:05:43.526 "uuid": "60f797e5-d225-475d-8491-8acae6feb4e5", 00:05:43.526 "is_configured": true, 00:05:43.526 "data_offset": 0, 00:05:43.526 "data_size": 65536 00:05:43.526 }, 00:05:43.526 { 00:05:43.526 "name": "BaseBdev2", 00:05:43.526 "uuid": "1af575c6-a96b-4778-9b8a-ea73925ee93b", 00:05:43.526 "is_configured": true, 00:05:43.526 "data_offset": 0, 00:05:43.526 "data_size": 65536 00:05:43.526 } 00:05:43.526 ] 00:05:43.526 }' 00:05:43.526 14:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:05:43.526 14:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:43.787 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:05:43.787 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:05:43.787 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:05:43.787 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:05:43.787 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:05:43.787 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:05:43.787 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:05:43.787 14:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:43.787 14:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:43.787 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:05:43.787 [2024-10-01 14:30:35.314983] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:05:43.787 14:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:43.787 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:05:43.787 "name": "Existed_Raid", 00:05:43.787 "aliases": [ 00:05:43.787 "6be53bfc-35d7-4dcf-87ff-26567aa6c634" 00:05:43.787 ], 00:05:43.787 "product_name": "Raid Volume", 00:05:43.787 "block_size": 512, 00:05:43.787 "num_blocks": 131072, 00:05:43.787 "uuid": "6be53bfc-35d7-4dcf-87ff-26567aa6c634", 00:05:43.787 "assigned_rate_limits": { 00:05:43.787 "rw_ios_per_sec": 0, 00:05:43.787 "rw_mbytes_per_sec": 0, 00:05:43.787 "r_mbytes_per_sec": 
0, 00:05:43.787 "w_mbytes_per_sec": 0 00:05:43.787 }, 00:05:43.787 "claimed": false, 00:05:43.787 "zoned": false, 00:05:43.787 "supported_io_types": { 00:05:43.787 "read": true, 00:05:43.787 "write": true, 00:05:43.787 "unmap": true, 00:05:43.787 "flush": true, 00:05:43.787 "reset": true, 00:05:43.787 "nvme_admin": false, 00:05:43.787 "nvme_io": false, 00:05:43.787 "nvme_io_md": false, 00:05:43.787 "write_zeroes": true, 00:05:43.787 "zcopy": false, 00:05:43.787 "get_zone_info": false, 00:05:43.787 "zone_management": false, 00:05:43.787 "zone_append": false, 00:05:43.787 "compare": false, 00:05:43.787 "compare_and_write": false, 00:05:43.787 "abort": false, 00:05:43.787 "seek_hole": false, 00:05:43.787 "seek_data": false, 00:05:43.787 "copy": false, 00:05:43.787 "nvme_iov_md": false 00:05:43.787 }, 00:05:43.787 "memory_domains": [ 00:05:43.787 { 00:05:43.787 "dma_device_id": "system", 00:05:43.787 "dma_device_type": 1 00:05:43.787 }, 00:05:43.787 { 00:05:43.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:43.787 "dma_device_type": 2 00:05:43.787 }, 00:05:43.787 { 00:05:43.787 "dma_device_id": "system", 00:05:43.787 "dma_device_type": 1 00:05:43.787 }, 00:05:43.787 { 00:05:43.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:43.787 "dma_device_type": 2 00:05:43.787 } 00:05:43.787 ], 00:05:43.787 "driver_specific": { 00:05:43.787 "raid": { 00:05:43.787 "uuid": "6be53bfc-35d7-4dcf-87ff-26567aa6c634", 00:05:43.787 "strip_size_kb": 64, 00:05:43.787 "state": "online", 00:05:43.787 "raid_level": "raid0", 00:05:43.787 "superblock": false, 00:05:43.787 "num_base_bdevs": 2, 00:05:43.787 "num_base_bdevs_discovered": 2, 00:05:43.787 "num_base_bdevs_operational": 2, 00:05:43.787 "base_bdevs_list": [ 00:05:43.787 { 00:05:43.787 "name": "BaseBdev1", 00:05:43.787 "uuid": "60f797e5-d225-475d-8491-8acae6feb4e5", 00:05:43.787 "is_configured": true, 00:05:43.787 "data_offset": 0, 00:05:43.787 "data_size": 65536 00:05:43.787 }, 00:05:43.787 { 00:05:43.787 "name": "BaseBdev2", 
00:05:43.787 "uuid": "1af575c6-a96b-4778-9b8a-ea73925ee93b", 00:05:43.787 "is_configured": true, 00:05:43.787 "data_offset": 0, 00:05:43.787 "data_size": 65536 00:05:43.787 } 00:05:43.787 ] 00:05:43.787 } 00:05:43.787 } 00:05:43.787 }' 00:05:43.787 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:05:43.787 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:05:43.787 BaseBdev2' 00:05:43.787 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:05:43.787 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:05:43.787 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:05:43.787 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:05:43.787 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:05:43.787 14:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:43.787 14:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:43.787 14:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:43.787 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:05:43.787 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:05:43.787 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:05:43.787 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:05:43.787 14:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:43.787 14:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:43.787 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:05:43.787 14:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.047 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:05:44.047 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:05:44.047 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:05:44.047 14:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.047 14:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:44.047 [2024-10-01 14:30:35.494768] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:05:44.047 [2024-10-01 14:30:35.494811] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:05:44.047 [2024-10-01 14:30:35.494872] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:05:44.047 14:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.047 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:05:44.047 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:05:44.047 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:05:44.047 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:05:44.047 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:05:44.047 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:05:44.047 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:05:44.047 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:05:44.047 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:05:44.047 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:05:44.047 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:05:44.047 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:05:44.047 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:05:44.048 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:05:44.048 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:05:44.048 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:44.048 14:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.048 14:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:44.048 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:05:44.048 14:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.048 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:05:44.048 "name": "Existed_Raid", 00:05:44.048 "uuid": "6be53bfc-35d7-4dcf-87ff-26567aa6c634", 00:05:44.048 "strip_size_kb": 64, 00:05:44.048 
"state": "offline", 00:05:44.048 "raid_level": "raid0", 00:05:44.048 "superblock": false, 00:05:44.048 "num_base_bdevs": 2, 00:05:44.048 "num_base_bdevs_discovered": 1, 00:05:44.048 "num_base_bdevs_operational": 1, 00:05:44.048 "base_bdevs_list": [ 00:05:44.048 { 00:05:44.048 "name": null, 00:05:44.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:05:44.048 "is_configured": false, 00:05:44.048 "data_offset": 0, 00:05:44.048 "data_size": 65536 00:05:44.048 }, 00:05:44.048 { 00:05:44.048 "name": "BaseBdev2", 00:05:44.048 "uuid": "1af575c6-a96b-4778-9b8a-ea73925ee93b", 00:05:44.048 "is_configured": true, 00:05:44.048 "data_offset": 0, 00:05:44.048 "data_size": 65536 00:05:44.048 } 00:05:44.048 ] 00:05:44.048 }' 00:05:44.048 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:05:44.048 14:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:44.308 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:05:44.308 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:05:44.308 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:44.308 14:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.308 14:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:44.308 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:05:44.308 14:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.308 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:05:44.308 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:05:44.308 14:30:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:05:44.308 14:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.308 14:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:44.308 [2024-10-01 14:30:35.929283] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:05:44.308 [2024-10-01 14:30:35.929351] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:05:44.569 14:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.569 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:05:44.569 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:05:44.569 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:44.569 14:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.569 14:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:44.570 14:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:05:44.570 14:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.570 14:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:05:44.570 14:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:05:44.570 14:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:05:44.570 14:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 59555 00:05:44.570 14:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 59555 ']' 00:05:44.570 14:30:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 59555 00:05:44.570 14:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:05:44.570 14:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:44.570 14:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59555 00:05:44.570 killing process with pid 59555 00:05:44.570 14:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:44.570 14:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:44.570 14:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59555' 00:05:44.570 14:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 59555 00:05:44.570 [2024-10-01 14:30:36.065699] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:05:44.570 14:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 59555 00:05:44.570 [2024-10-01 14:30:36.077630] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:05:45.514 14:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:05:45.514 00:05:45.514 real 0m4.260s 00:05:45.514 user 0m5.908s 00:05:45.514 sys 0m0.748s 00:05:45.514 14:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.514 14:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:05:45.514 ************************************ 00:05:45.514 END TEST raid_state_function_test 00:05:45.514 ************************************ 00:05:45.514 14:30:37 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:05:45.514 14:30:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 
']' 00:05:45.514 14:30:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.514 14:30:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:05:45.514 ************************************ 00:05:45.514 START TEST raid_state_function_test_sb 00:05:45.514 ************************************ 00:05:45.514 14:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true 00:05:45.514 14:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:05:45.514 14:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:05:45.514 14:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:05:45.514 14:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:05:45.514 14:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:05:45.514 14:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:05:45.514 14:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:05:45.514 14:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:05:45.514 14:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:05:45.514 14:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:05:45.514 14:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:05:45.514 14:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:05:45.514 14:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:05:45.514 14:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:05:45.514 14:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:05:45.514 Process raid pid: 59799 00:05:45.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.514 14:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:05:45.514 14:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:05:45.514 14:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:05:45.514 14:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:05:45.514 14:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:05:45.514 14:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:05:45.514 14:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:05:45.514 14:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:05:45.514 14:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=59799 00:05:45.514 14:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 59799' 00:05:45.514 14:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 59799 00:05:45.514 14:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 59799 ']' 00:05:45.514 14:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.514 14:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:45.514 14:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.514 14:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:45.514 14:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:45.514 14:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:05:45.514 [2024-10-01 14:30:37.160224] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:05:45.514 [2024-10-01 14:30:37.160376] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:45.775 [2024-10-01 14:30:37.312091] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.036 [2024-10-01 14:30:37.577863] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.297 [2024-10-01 14:30:37.740860] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:46.297 [2024-10-01 14:30:37.741086] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:46.558 14:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:46.558 14:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:05:46.558 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:05:46.558 14:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.558 14:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:46.558 [2024-10-01 14:30:38.049633] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:05:46.558 [2024-10-01 14:30:38.049726] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:05:46.558 [2024-10-01 14:30:38.049739] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:05:46.558 [2024-10-01 14:30:38.049750] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:05:46.558 14:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.558 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:05:46.558 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:05:46.558 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:05:46.558 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:05:46.558 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:05:46.558 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:05:46.558 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:05:46.558 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:05:46.558 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:05:46.558 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:05:46.558 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:46.558 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:05:46.558 14:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.558 14:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:46.558 14:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.558 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:05:46.558 "name": "Existed_Raid", 00:05:46.558 "uuid": "e469d732-5c7f-409d-a887-a871a2b1a22a", 00:05:46.558 "strip_size_kb": 64, 00:05:46.558 "state": "configuring", 00:05:46.558 "raid_level": "raid0", 00:05:46.558 "superblock": true, 00:05:46.558 "num_base_bdevs": 2, 00:05:46.558 "num_base_bdevs_discovered": 0, 00:05:46.558 "num_base_bdevs_operational": 2, 00:05:46.558 "base_bdevs_list": [ 00:05:46.558 { 00:05:46.558 "name": "BaseBdev1", 00:05:46.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:05:46.558 "is_configured": false, 00:05:46.558 "data_offset": 0, 00:05:46.558 "data_size": 0 00:05:46.558 }, 00:05:46.558 { 00:05:46.558 "name": "BaseBdev2", 00:05:46.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:05:46.558 "is_configured": false, 00:05:46.558 "data_offset": 0, 00:05:46.558 "data_size": 0 00:05:46.558 } 00:05:46.558 ] 00:05:46.558 }' 00:05:46.558 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:05:46.558 14:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:46.818 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:05:46.818 14:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.818 14:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:46.818 [2024-10-01 14:30:38.397595] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:05:46.818 [2024-10-01 14:30:38.397656] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:05:46.818 14:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.818 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:05:46.818 14:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.818 14:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:46.818 [2024-10-01 14:30:38.409663] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:05:46.818 [2024-10-01 14:30:38.409860] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:05:46.818 [2024-10-01 14:30:38.409934] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:05:46.818 [2024-10-01 14:30:38.409967] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:05:46.818 14:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.818 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:05:46.818 14:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.818 14:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:46.819 [2024-10-01 14:30:38.468794] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:05:46.819 BaseBdev1 00:05:46.819 14:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.819 14:30:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:05:46.819 14:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:05:46.819 14:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:05:46.819 14:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:05:46.819 14:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:05:46.819 14:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:05:46.819 14:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:05:46.819 14:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.819 14:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:46.819 14:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.819 14:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:05:46.819 14:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.819 14:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:46.819 [ 00:05:46.819 { 00:05:46.819 "name": "BaseBdev1", 00:05:46.819 "aliases": [ 00:05:46.819 "f133541d-694b-473c-b781-7920f0f44fe9" 00:05:46.819 ], 00:05:46.819 "product_name": "Malloc disk", 00:05:46.819 "block_size": 512, 00:05:46.819 "num_blocks": 65536, 00:05:46.819 "uuid": "f133541d-694b-473c-b781-7920f0f44fe9", 00:05:46.819 "assigned_rate_limits": { 00:05:46.819 "rw_ios_per_sec": 0, 00:05:46.819 "rw_mbytes_per_sec": 0, 00:05:46.819 "r_mbytes_per_sec": 0, 00:05:46.819 "w_mbytes_per_sec": 0 00:05:46.819 }, 00:05:46.819 "claimed": true, 
00:05:46.819 "claim_type": "exclusive_write", 00:05:46.819 "zoned": false, 00:05:46.819 "supported_io_types": { 00:05:46.819 "read": true, 00:05:46.819 "write": true, 00:05:46.819 "unmap": true, 00:05:46.819 "flush": true, 00:05:46.819 "reset": true, 00:05:46.819 "nvme_admin": false, 00:05:46.819 "nvme_io": false, 00:05:46.819 "nvme_io_md": false, 00:05:46.819 "write_zeroes": true, 00:05:46.819 "zcopy": true, 00:05:46.819 "get_zone_info": false, 00:05:46.819 "zone_management": false, 00:05:46.819 "zone_append": false, 00:05:46.819 "compare": false, 00:05:46.819 "compare_and_write": false, 00:05:46.819 "abort": true, 00:05:46.819 "seek_hole": false, 00:05:46.819 "seek_data": false, 00:05:46.819 "copy": true, 00:05:46.819 "nvme_iov_md": false 00:05:46.819 }, 00:05:46.819 "memory_domains": [ 00:05:46.819 { 00:05:46.819 "dma_device_id": "system", 00:05:46.819 "dma_device_type": 1 00:05:46.819 }, 00:05:46.819 { 00:05:46.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.819 "dma_device_type": 2 00:05:46.819 } 00:05:46.819 ], 00:05:46.819 "driver_specific": {} 00:05:46.819 } 00:05:46.819 ] 00:05:46.819 14:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.819 14:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:05:46.819 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:05:46.819 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:05:46.819 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:05:46.819 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:05:46.819 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:05:46.819 14:30:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:05:46.819 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:05:46.819 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:05:46.819 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:05:46.819 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:05:47.098 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:47.098 14:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.098 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:05:47.098 14:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:47.098 14:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.098 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:05:47.098 "name": "Existed_Raid", 00:05:47.098 "uuid": "7358880d-a61a-40d1-a31a-1de9aa396dda", 00:05:47.098 "strip_size_kb": 64, 00:05:47.098 "state": "configuring", 00:05:47.098 "raid_level": "raid0", 00:05:47.098 "superblock": true, 00:05:47.098 "num_base_bdevs": 2, 00:05:47.098 "num_base_bdevs_discovered": 1, 00:05:47.098 "num_base_bdevs_operational": 2, 00:05:47.098 "base_bdevs_list": [ 00:05:47.098 { 00:05:47.098 "name": "BaseBdev1", 00:05:47.098 "uuid": "f133541d-694b-473c-b781-7920f0f44fe9", 00:05:47.098 "is_configured": true, 00:05:47.098 "data_offset": 2048, 00:05:47.098 "data_size": 63488 00:05:47.098 }, 00:05:47.098 { 00:05:47.098 "name": "BaseBdev2", 00:05:47.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:05:47.098 
"is_configured": false, 00:05:47.098 "data_offset": 0, 00:05:47.098 "data_size": 0 00:05:47.098 } 00:05:47.098 ] 00:05:47.098 }' 00:05:47.098 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:05:47.098 14:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:47.395 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:05:47.395 14:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.395 14:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:47.395 [2024-10-01 14:30:38.852957] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:05:47.395 [2024-10-01 14:30:38.853028] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:05:47.395 14:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.395 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:05:47.395 14:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.395 14:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:47.395 [2024-10-01 14:30:38.865126] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:05:47.395 [2024-10-01 14:30:38.867444] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:05:47.395 [2024-10-01 14:30:38.867637] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:05:47.395 14:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.395 14:30:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:05:47.395 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:05:47.396 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:05:47.396 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:05:47.396 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:05:47.396 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:05:47.396 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:05:47.396 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:05:47.396 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:05:47.396 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:05:47.396 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:05:47.396 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:05:47.396 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:47.396 14:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.396 14:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:47.396 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:05:47.396 14:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.396 14:30:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:05:47.396 "name": "Existed_Raid", 00:05:47.396 "uuid": "177d1be4-3c4a-4127-ad86-bbd295d0d390", 00:05:47.396 "strip_size_kb": 64, 00:05:47.396 "state": "configuring", 00:05:47.396 "raid_level": "raid0", 00:05:47.396 "superblock": true, 00:05:47.396 "num_base_bdevs": 2, 00:05:47.396 "num_base_bdevs_discovered": 1, 00:05:47.396 "num_base_bdevs_operational": 2, 00:05:47.396 "base_bdevs_list": [ 00:05:47.396 { 00:05:47.396 "name": "BaseBdev1", 00:05:47.396 "uuid": "f133541d-694b-473c-b781-7920f0f44fe9", 00:05:47.396 "is_configured": true, 00:05:47.396 "data_offset": 2048, 00:05:47.396 "data_size": 63488 00:05:47.396 }, 00:05:47.396 { 00:05:47.396 "name": "BaseBdev2", 00:05:47.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:05:47.396 "is_configured": false, 00:05:47.396 "data_offset": 0, 00:05:47.396 "data_size": 0 00:05:47.396 } 00:05:47.396 ] 00:05:47.396 }' 00:05:47.396 14:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:05:47.396 14:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:47.657 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:05:47.657 14:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.657 14:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:47.657 [2024-10-01 14:30:39.236916] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:05:47.657 [2024-10-01 14:30:39.237444] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:05:47.657 [2024-10-01 14:30:39.237469] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:05:47.657 [2024-10-01 14:30:39.237910] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:05:47.657 [2024-10-01 14:30:39.238073] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:05:47.657 [2024-10-01 14:30:39.238086] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:05:47.657 BaseBdev2 00:05:47.657 [2024-10-01 14:30:39.238242] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:05:47.657 14:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.657 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:05:47.657 14:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:05:47.657 14:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:05:47.657 14:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:05:47.657 14:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:05:47.657 14:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:05:47.657 14:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:05:47.657 14:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.657 14:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:47.657 14:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.657 14:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:05:47.657 14:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.657 14:30:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:47.657 [ 00:05:47.657 { 00:05:47.657 "name": "BaseBdev2", 00:05:47.657 "aliases": [ 00:05:47.657 "781e04b5-d369-4231-b44b-e91fba540f7c" 00:05:47.657 ], 00:05:47.657 "product_name": "Malloc disk", 00:05:47.657 "block_size": 512, 00:05:47.657 "num_blocks": 65536, 00:05:47.657 "uuid": "781e04b5-d369-4231-b44b-e91fba540f7c", 00:05:47.657 "assigned_rate_limits": { 00:05:47.657 "rw_ios_per_sec": 0, 00:05:47.657 "rw_mbytes_per_sec": 0, 00:05:47.657 "r_mbytes_per_sec": 0, 00:05:47.657 "w_mbytes_per_sec": 0 00:05:47.657 }, 00:05:47.657 "claimed": true, 00:05:47.657 "claim_type": "exclusive_write", 00:05:47.657 "zoned": false, 00:05:47.657 "supported_io_types": { 00:05:47.657 "read": true, 00:05:47.657 "write": true, 00:05:47.657 "unmap": true, 00:05:47.657 "flush": true, 00:05:47.657 "reset": true, 00:05:47.657 "nvme_admin": false, 00:05:47.657 "nvme_io": false, 00:05:47.657 "nvme_io_md": false, 00:05:47.657 "write_zeroes": true, 00:05:47.657 "zcopy": true, 00:05:47.657 "get_zone_info": false, 00:05:47.657 "zone_management": false, 00:05:47.657 "zone_append": false, 00:05:47.657 "compare": false, 00:05:47.657 "compare_and_write": false, 00:05:47.657 "abort": true, 00:05:47.657 "seek_hole": false, 00:05:47.657 "seek_data": false, 00:05:47.657 "copy": true, 00:05:47.657 "nvme_iov_md": false 00:05:47.657 }, 00:05:47.657 "memory_domains": [ 00:05:47.657 { 00:05:47.657 "dma_device_id": "system", 00:05:47.657 "dma_device_type": 1 00:05:47.657 }, 00:05:47.657 { 00:05:47.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.657 "dma_device_type": 2 00:05:47.657 } 00:05:47.657 ], 00:05:47.657 "driver_specific": {} 00:05:47.657 } 00:05:47.657 ] 00:05:47.657 14:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.657 14:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:05:47.657 14:30:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:05:47.657 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:05:47.657 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:05:47.657 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:05:47.657 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:05:47.657 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:05:47.657 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:05:47.657 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:05:47.657 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:05:47.657 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:05:47.657 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:05:47.657 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:05:47.657 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:47.657 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:05:47.657 14:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.657 14:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:47.657 14:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.657 14:30:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:05:47.657 "name": "Existed_Raid", 00:05:47.657 "uuid": "177d1be4-3c4a-4127-ad86-bbd295d0d390", 00:05:47.657 "strip_size_kb": 64, 00:05:47.657 "state": "online", 00:05:47.657 "raid_level": "raid0", 00:05:47.657 "superblock": true, 00:05:47.657 "num_base_bdevs": 2, 00:05:47.657 "num_base_bdevs_discovered": 2, 00:05:47.657 "num_base_bdevs_operational": 2, 00:05:47.657 "base_bdevs_list": [ 00:05:47.657 { 00:05:47.657 "name": "BaseBdev1", 00:05:47.657 "uuid": "f133541d-694b-473c-b781-7920f0f44fe9", 00:05:47.657 "is_configured": true, 00:05:47.657 "data_offset": 2048, 00:05:47.657 "data_size": 63488 00:05:47.657 }, 00:05:47.657 { 00:05:47.657 "name": "BaseBdev2", 00:05:47.657 "uuid": "781e04b5-d369-4231-b44b-e91fba540f7c", 00:05:47.657 "is_configured": true, 00:05:47.657 "data_offset": 2048, 00:05:47.657 "data_size": 63488 00:05:47.657 } 00:05:47.657 ] 00:05:47.657 }' 00:05:47.657 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:05:47.657 14:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:47.918 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:05:47.918 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:05:47.918 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:05:47.918 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:05:47.918 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:05:47.918 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:05:47.918 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:05:47.918 14:30:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:05:47.918 14:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.918 14:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:48.180 [2024-10-01 14:30:39.601390] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:05:48.180 14:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.180 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:05:48.180 "name": "Existed_Raid", 00:05:48.180 "aliases": [ 00:05:48.180 "177d1be4-3c4a-4127-ad86-bbd295d0d390" 00:05:48.180 ], 00:05:48.180 "product_name": "Raid Volume", 00:05:48.180 "block_size": 512, 00:05:48.180 "num_blocks": 126976, 00:05:48.180 "uuid": "177d1be4-3c4a-4127-ad86-bbd295d0d390", 00:05:48.180 "assigned_rate_limits": { 00:05:48.180 "rw_ios_per_sec": 0, 00:05:48.180 "rw_mbytes_per_sec": 0, 00:05:48.180 "r_mbytes_per_sec": 0, 00:05:48.180 "w_mbytes_per_sec": 0 00:05:48.180 }, 00:05:48.180 "claimed": false, 00:05:48.180 "zoned": false, 00:05:48.180 "supported_io_types": { 00:05:48.180 "read": true, 00:05:48.180 "write": true, 00:05:48.180 "unmap": true, 00:05:48.180 "flush": true, 00:05:48.180 "reset": true, 00:05:48.180 "nvme_admin": false, 00:05:48.180 "nvme_io": false, 00:05:48.180 "nvme_io_md": false, 00:05:48.180 "write_zeroes": true, 00:05:48.180 "zcopy": false, 00:05:48.180 "get_zone_info": false, 00:05:48.180 "zone_management": false, 00:05:48.180 "zone_append": false, 00:05:48.180 "compare": false, 00:05:48.180 "compare_and_write": false, 00:05:48.180 "abort": false, 00:05:48.180 "seek_hole": false, 00:05:48.180 "seek_data": false, 00:05:48.180 "copy": false, 00:05:48.180 "nvme_iov_md": false 00:05:48.180 }, 00:05:48.180 "memory_domains": [ 00:05:48.180 { 00:05:48.180 "dma_device_id": 
"system", 00:05:48.180 "dma_device_type": 1 00:05:48.180 }, 00:05:48.180 { 00:05:48.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.180 "dma_device_type": 2 00:05:48.180 }, 00:05:48.180 { 00:05:48.180 "dma_device_id": "system", 00:05:48.180 "dma_device_type": 1 00:05:48.180 }, 00:05:48.180 { 00:05:48.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.180 "dma_device_type": 2 00:05:48.180 } 00:05:48.180 ], 00:05:48.180 "driver_specific": { 00:05:48.180 "raid": { 00:05:48.180 "uuid": "177d1be4-3c4a-4127-ad86-bbd295d0d390", 00:05:48.180 "strip_size_kb": 64, 00:05:48.180 "state": "online", 00:05:48.180 "raid_level": "raid0", 00:05:48.180 "superblock": true, 00:05:48.180 "num_base_bdevs": 2, 00:05:48.180 "num_base_bdevs_discovered": 2, 00:05:48.180 "num_base_bdevs_operational": 2, 00:05:48.180 "base_bdevs_list": [ 00:05:48.180 { 00:05:48.180 "name": "BaseBdev1", 00:05:48.180 "uuid": "f133541d-694b-473c-b781-7920f0f44fe9", 00:05:48.180 "is_configured": true, 00:05:48.180 "data_offset": 2048, 00:05:48.180 "data_size": 63488 00:05:48.180 }, 00:05:48.180 { 00:05:48.180 "name": "BaseBdev2", 00:05:48.180 "uuid": "781e04b5-d369-4231-b44b-e91fba540f7c", 00:05:48.180 "is_configured": true, 00:05:48.180 "data_offset": 2048, 00:05:48.180 "data_size": 63488 00:05:48.180 } 00:05:48.180 ] 00:05:48.180 } 00:05:48.180 } 00:05:48.180 }' 00:05:48.180 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:05:48.180 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:05:48.180 BaseBdev2' 00:05:48.180 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:05:48.180 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:05:48.180 14:30:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:05:48.180 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:05:48.180 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:05:48.180 14:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.180 14:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:48.180 14:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.180 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:05:48.180 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:05:48.180 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:05:48.180 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:05:48.180 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:05:48.180 14:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.180 14:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:48.180 14:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.180 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:05:48.180 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:05:48.180 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 
00:05:48.180 14:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.180 14:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:48.180 [2024-10-01 14:30:39.781181] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:05:48.180 [2024-10-01 14:30:39.781226] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:05:48.180 [2024-10-01 14:30:39.781291] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:05:48.180 14:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.180 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:05:48.180 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:05:48.180 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:05:48.180 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:05:48.180 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:05:48.180 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:05:48.181 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:05:48.181 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:05:48.181 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:05:48.181 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:05:48.181 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:05:48.181 14:30:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:05:48.181 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:05:48.181 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:05:48.181 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:05:48.181 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:48.181 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:05:48.181 14:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.181 14:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:48.442 14:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.442 14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:05:48.442 "name": "Existed_Raid", 00:05:48.442 "uuid": "177d1be4-3c4a-4127-ad86-bbd295d0d390", 00:05:48.442 "strip_size_kb": 64, 00:05:48.442 "state": "offline", 00:05:48.442 "raid_level": "raid0", 00:05:48.442 "superblock": true, 00:05:48.442 "num_base_bdevs": 2, 00:05:48.442 "num_base_bdevs_discovered": 1, 00:05:48.442 "num_base_bdevs_operational": 1, 00:05:48.442 "base_bdevs_list": [ 00:05:48.442 { 00:05:48.442 "name": null, 00:05:48.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:05:48.442 "is_configured": false, 00:05:48.442 "data_offset": 0, 00:05:48.442 "data_size": 63488 00:05:48.442 }, 00:05:48.442 { 00:05:48.442 "name": "BaseBdev2", 00:05:48.442 "uuid": "781e04b5-d369-4231-b44b-e91fba540f7c", 00:05:48.442 "is_configured": true, 00:05:48.442 "data_offset": 2048, 00:05:48.442 "data_size": 63488 00:05:48.442 } 00:05:48.442 ] 00:05:48.442 }' 00:05:48.442 
14:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:05:48.442 14:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:48.703 14:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:05:48.703 14:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:05:48.703 14:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:48.703 14:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:05:48.703 14:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.703 14:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:48.703 14:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.703 14:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:05:48.703 14:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:05:48.703 14:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:05:48.703 14:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.703 14:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:48.703 [2024-10-01 14:30:40.264336] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:05:48.703 [2024-10-01 14:30:40.264573] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:05:48.703 14:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.703 14:30:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i++ )) 00:05:48.703 14:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:05:48.703 14:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:48.703 14:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:05:48.703 14:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.703 14:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:48.703 14:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.703 14:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:05:48.703 14:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:05:48.703 14:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:05:48.703 14:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 59799 00:05:48.703 14:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 59799 ']' 00:05:48.703 14:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 59799 00:05:48.703 14:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:05:48.703 14:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:48.703 14:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59799 00:05:49.014 killing process with pid 59799 00:05:49.014 14:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:49.014 14:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:05:49.014 14:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59799' 00:05:49.014 14:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 59799 00:05:49.014 14:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 59799 00:05:49.014 [2024-10-01 14:30:40.402624] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:05:49.014 [2024-10-01 14:30:40.414362] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:05:49.966 14:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:05:49.966 00:05:49.966 real 0m4.261s 00:05:49.966 user 0m5.949s 00:05:49.966 sys 0m0.752s 00:05:49.966 14:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.966 14:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:05:49.966 ************************************ 00:05:49.966 END TEST raid_state_function_test_sb 00:05:49.966 ************************************ 00:05:49.966 14:30:41 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:05:49.966 14:30:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:49.966 14:30:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.966 14:30:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:05:49.966 ************************************ 00:05:49.966 START TEST raid_superblock_test 00:05:49.966 ************************************ 00:05:49.966 14:30:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2 00:05:49.966 14:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:05:49.966 14:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 
00:05:49.966 14:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:05:49.966 14:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:05:49.966 14:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:05:49.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.966 14:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:05:49.966 14:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:05:49.966 14:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:05:49.966 14:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:05:49.966 14:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:05:49.966 14:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:05:49.966 14:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:05:49.966 14:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:05:49.966 14:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:05:49.966 14:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:05:49.966 14:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:05:49.966 14:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=60045 00:05:49.966 14:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 60045 00:05:49.966 14:30:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 60045 ']' 00:05:49.966 14:30:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:49.966 14:30:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:49.966 14:30:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.966 14:30:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:49.966 14:30:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:49.966 14:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:05:49.966 [2024-10-01 14:30:41.499828] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:05:49.966 [2024-10-01 14:30:41.499977] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60045 ] 00:05:50.228 [2024-10-01 14:30:41.657226] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.489 [2024-10-01 14:30:41.916462] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.489 [2024-10-01 14:30:42.079158] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:50.489 [2024-10-01 14:30:42.079458] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:50.749 14:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.749 14:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:05:50.749 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:05:50.749 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:05:50.749 14:30:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:05:50.749 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:05:50.749 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:05:50.749 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:05:50.749 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:05:50.749 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:05:50.749 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:05:50.749 14:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.749 14:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:50.749 malloc1 00:05:50.749 14:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.749 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:05:50.749 14:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.749 14:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:50.749 [2024-10-01 14:30:42.410584] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:05:50.749 [2024-10-01 14:30:42.410666] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:50.749 [2024-10-01 14:30:42.410690] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:05:50.749 [2024-10-01 14:30:42.410723] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:50.749 
[2024-10-01 14:30:42.413267] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:50.749 [2024-10-01 14:30:42.413321] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:05:50.749 pt1 00:05:50.749 14:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.749 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:05:50.749 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:05:50.749 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:05:50.749 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:05:50.749 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:05:50.749 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:05:50.749 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:05:50.749 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:05:50.749 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:05:50.749 14:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.749 14:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:51.015 malloc2 00:05:51.015 14:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.015 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:05:51.015 14:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.015 14:30:42 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:51.015 [2024-10-01 14:30:42.467420] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:05:51.015 [2024-10-01 14:30:42.467502] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:51.015 [2024-10-01 14:30:42.467530] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:05:51.015 [2024-10-01 14:30:42.467540] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:51.015 [2024-10-01 14:30:42.470082] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:51.015 [2024-10-01 14:30:42.470133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:05:51.015 pt2 00:05:51.015 14:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.015 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:05:51.015 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:05:51.015 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:05:51.015 14:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.015 14:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:51.015 [2024-10-01 14:30:42.475493] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:05:51.015 [2024-10-01 14:30:42.477875] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:05:51.015 [2024-10-01 14:30:42.478060] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:05:51.015 [2024-10-01 14:30:42.478074] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:05:51.015 
[2024-10-01 14:30:42.478385] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:05:51.015 [2024-10-01 14:30:42.478536] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:05:51.015 [2024-10-01 14:30:42.478548] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:05:51.015 [2024-10-01 14:30:42.478900] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:05:51.015 14:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.015 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:05:51.015 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:05:51.015 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:05:51.015 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:05:51.015 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:05:51.015 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:05:51.015 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:05:51.015 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:05:51.015 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:05:51.015 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:05:51.015 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:51.015 14:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.015 14:30:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:05:51.015 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:05:51.015 14:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.015 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:05:51.015 "name": "raid_bdev1", 00:05:51.015 "uuid": "3c384abb-2876-4051-820c-45604498e9d5", 00:05:51.015 "strip_size_kb": 64, 00:05:51.015 "state": "online", 00:05:51.015 "raid_level": "raid0", 00:05:51.015 "superblock": true, 00:05:51.015 "num_base_bdevs": 2, 00:05:51.015 "num_base_bdevs_discovered": 2, 00:05:51.015 "num_base_bdevs_operational": 2, 00:05:51.015 "base_bdevs_list": [ 00:05:51.015 { 00:05:51.015 "name": "pt1", 00:05:51.015 "uuid": "00000000-0000-0000-0000-000000000001", 00:05:51.015 "is_configured": true, 00:05:51.015 "data_offset": 2048, 00:05:51.015 "data_size": 63488 00:05:51.015 }, 00:05:51.015 { 00:05:51.015 "name": "pt2", 00:05:51.015 "uuid": "00000000-0000-0000-0000-000000000002", 00:05:51.015 "is_configured": true, 00:05:51.015 "data_offset": 2048, 00:05:51.015 "data_size": 63488 00:05:51.015 } 00:05:51.015 ] 00:05:51.015 }' 00:05:51.015 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:05:51.015 14:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:51.290 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:05:51.290 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:05:51.290 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:05:51.290 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:05:51.290 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:05:51.290 14:30:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:05:51.290 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:05:51.290 14:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.290 14:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:51.290 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:05:51.290 [2024-10-01 14:30:42.803899] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:05:51.290 14:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.290 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:05:51.290 "name": "raid_bdev1", 00:05:51.290 "aliases": [ 00:05:51.290 "3c384abb-2876-4051-820c-45604498e9d5" 00:05:51.290 ], 00:05:51.290 "product_name": "Raid Volume", 00:05:51.290 "block_size": 512, 00:05:51.290 "num_blocks": 126976, 00:05:51.290 "uuid": "3c384abb-2876-4051-820c-45604498e9d5", 00:05:51.290 "assigned_rate_limits": { 00:05:51.290 "rw_ios_per_sec": 0, 00:05:51.290 "rw_mbytes_per_sec": 0, 00:05:51.290 "r_mbytes_per_sec": 0, 00:05:51.290 "w_mbytes_per_sec": 0 00:05:51.290 }, 00:05:51.290 "claimed": false, 00:05:51.290 "zoned": false, 00:05:51.290 "supported_io_types": { 00:05:51.290 "read": true, 00:05:51.290 "write": true, 00:05:51.290 "unmap": true, 00:05:51.290 "flush": true, 00:05:51.290 "reset": true, 00:05:51.290 "nvme_admin": false, 00:05:51.290 "nvme_io": false, 00:05:51.290 "nvme_io_md": false, 00:05:51.290 "write_zeroes": true, 00:05:51.290 "zcopy": false, 00:05:51.290 "get_zone_info": false, 00:05:51.290 "zone_management": false, 00:05:51.290 "zone_append": false, 00:05:51.290 "compare": false, 00:05:51.290 "compare_and_write": false, 00:05:51.290 "abort": false, 00:05:51.290 "seek_hole": false, 00:05:51.290 
"seek_data": false, 00:05:51.290 "copy": false, 00:05:51.290 "nvme_iov_md": false 00:05:51.290 }, 00:05:51.290 "memory_domains": [ 00:05:51.290 { 00:05:51.290 "dma_device_id": "system", 00:05:51.290 "dma_device_type": 1 00:05:51.290 }, 00:05:51.290 { 00:05:51.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.290 "dma_device_type": 2 00:05:51.290 }, 00:05:51.290 { 00:05:51.290 "dma_device_id": "system", 00:05:51.290 "dma_device_type": 1 00:05:51.290 }, 00:05:51.290 { 00:05:51.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.290 "dma_device_type": 2 00:05:51.290 } 00:05:51.290 ], 00:05:51.290 "driver_specific": { 00:05:51.290 "raid": { 00:05:51.290 "uuid": "3c384abb-2876-4051-820c-45604498e9d5", 00:05:51.290 "strip_size_kb": 64, 00:05:51.290 "state": "online", 00:05:51.290 "raid_level": "raid0", 00:05:51.290 "superblock": true, 00:05:51.290 "num_base_bdevs": 2, 00:05:51.290 "num_base_bdevs_discovered": 2, 00:05:51.290 "num_base_bdevs_operational": 2, 00:05:51.290 "base_bdevs_list": [ 00:05:51.290 { 00:05:51.290 "name": "pt1", 00:05:51.290 "uuid": "00000000-0000-0000-0000-000000000001", 00:05:51.290 "is_configured": true, 00:05:51.290 "data_offset": 2048, 00:05:51.290 "data_size": 63488 00:05:51.290 }, 00:05:51.290 { 00:05:51.290 "name": "pt2", 00:05:51.290 "uuid": "00000000-0000-0000-0000-000000000002", 00:05:51.290 "is_configured": true, 00:05:51.290 "data_offset": 2048, 00:05:51.290 "data_size": 63488 00:05:51.290 } 00:05:51.290 ] 00:05:51.290 } 00:05:51.290 } 00:05:51.290 }' 00:05:51.290 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:05:51.290 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:05:51.290 pt2' 00:05:51.290 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:05:51.290 14:30:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:05:51.290 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:05:51.290 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:05:51.290 14:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.290 14:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:51.290 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:05:51.290 14:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.290 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:05:51.290 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:05:51.290 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:05:51.290 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:05:51.290 14:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.290 14:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:51.290 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:05:51.290 14:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.290 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:05:51.290 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:05:51.290 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:05:51.290 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:05:51.290 14:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.290 14:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:51.290 [2024-10-01 14:30:42.963881] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:05:51.552 14:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.552 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3c384abb-2876-4051-820c-45604498e9d5 00:05:51.552 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3c384abb-2876-4051-820c-45604498e9d5 ']' 00:05:51.552 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:05:51.552 14:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.552 14:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:51.552 [2024-10-01 14:30:42.983565] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:05:51.552 [2024-10-01 14:30:42.983600] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:05:51.552 [2024-10-01 14:30:42.983696] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:05:51.553 [2024-10-01 14:30:42.983770] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:05:51.553 [2024-10-01 14:30:42.983784] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:05:51.553 14:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.553 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:05:51.553 14:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:05:51.553 14:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.553 14:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:51.553 14:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:51.553 [2024-10-01 14:30:43.083595] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:05:51.553 [2024-10-01 14:30:43.085892] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:05:51.553 [2024-10-01 14:30:43.085981] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:05:51.553 [2024-10-01 14:30:43.086048] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:05:51.553 [2024-10-01 14:30:43.086064] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:05:51.553 [2024-10-01 14:30:43.086076] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:05:51.553 request: 00:05:51.553 { 00:05:51.553 "name": "raid_bdev1", 00:05:51.553 "raid_level": "raid0", 00:05:51.553 "base_bdevs": [ 00:05:51.553 "malloc1", 00:05:51.553 "malloc2" 00:05:51.553 ], 00:05:51.553 "strip_size_kb": 64, 00:05:51.553 "superblock": false, 00:05:51.553 "method": "bdev_raid_create", 00:05:51.553 "req_id": 1 00:05:51.553 } 00:05:51.553 Got JSON-RPC error response 00:05:51.553 response: 00:05:51.553 { 00:05:51.553 "code": -17, 00:05:51.553 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:05:51.553 } 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:51.553 
14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:51.553 [2024-10-01 14:30:43.127587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:05:51.553 [2024-10-01 14:30:43.127840] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:51.553 [2024-10-01 14:30:43.127873] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:51.553 [2024-10-01 14:30:43.127886] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:51.553 [2024-10-01 14:30:43.130484] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:51.553 [2024-10-01 14:30:43.130543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:05:51.553 [2024-10-01 14:30:43.130647] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:05:51.553 [2024-10-01 14:30:43.130731] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:05:51.553 pt1 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:05:51.553 "name": "raid_bdev1", 00:05:51.553 "uuid": "3c384abb-2876-4051-820c-45604498e9d5", 00:05:51.553 "strip_size_kb": 64, 00:05:51.553 "state": "configuring", 00:05:51.553 "raid_level": "raid0", 00:05:51.553 "superblock": true, 00:05:51.553 "num_base_bdevs": 2, 00:05:51.553 "num_base_bdevs_discovered": 1, 00:05:51.553 "num_base_bdevs_operational": 2, 00:05:51.553 "base_bdevs_list": [ 00:05:51.553 { 00:05:51.553 "name": "pt1", 00:05:51.553 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:05:51.553 "is_configured": true, 00:05:51.553 "data_offset": 2048, 00:05:51.553 "data_size": 63488 00:05:51.553 }, 00:05:51.553 { 00:05:51.553 "name": null, 00:05:51.553 "uuid": "00000000-0000-0000-0000-000000000002", 00:05:51.553 "is_configured": false, 00:05:51.553 "data_offset": 2048, 00:05:51.553 "data_size": 63488 00:05:51.553 } 00:05:51.553 ] 00:05:51.553 }' 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:05:51.553 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:51.815 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:05:51.815 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:05:51.815 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:05:51.815 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:05:51.815 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.815 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:51.815 [2024-10-01 14:30:43.447646] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:05:51.815 [2024-10-01 14:30:43.447743] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:51.815 [2024-10-01 14:30:43.447765] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:05:51.815 [2024-10-01 14:30:43.447778] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:51.815 [2024-10-01 14:30:43.448319] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:51.815 [2024-10-01 14:30:43.448348] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:05:51.815 [2024-10-01 14:30:43.448440] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:05:51.815 [2024-10-01 14:30:43.448467] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:05:51.815 [2024-10-01 14:30:43.448588] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:05:51.815 [2024-10-01 14:30:43.448600] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:05:51.815 [2024-10-01 14:30:43.448885] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:05:51.815 [2024-10-01 14:30:43.449038] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:05:51.815 [2024-10-01 14:30:43.449046] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:05:51.816 [2024-10-01 14:30:43.449190] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:05:51.816 pt2 00:05:51.816 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.816 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:05:51.816 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:05:51.816 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:05:51.816 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:05:51.816 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:05:51.816 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:05:51.816 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:05:51.816 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:05:51.816 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:05:51.816 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:05:51.816 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:05:51.816 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:05:51.816 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:05:51.816 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:51.816 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.816 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:51.816 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.816 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:05:51.816 "name": "raid_bdev1", 00:05:51.816 "uuid": "3c384abb-2876-4051-820c-45604498e9d5", 00:05:51.816 "strip_size_kb": 64, 00:05:51.816 "state": "online", 00:05:51.816 "raid_level": "raid0", 00:05:51.816 "superblock": true, 00:05:51.816 "num_base_bdevs": 2, 00:05:51.816 "num_base_bdevs_discovered": 2, 00:05:51.816 "num_base_bdevs_operational": 2, 00:05:51.816 "base_bdevs_list": [ 00:05:51.816 { 00:05:51.816 "name": "pt1", 00:05:51.816 "uuid": "00000000-0000-0000-0000-000000000001", 00:05:51.816 "is_configured": true, 00:05:51.816 "data_offset": 2048, 00:05:51.816 "data_size": 63488 00:05:51.816 }, 00:05:51.816 { 00:05:51.816 "name": "pt2", 00:05:51.816 "uuid": "00000000-0000-0000-0000-000000000002", 00:05:51.816 "is_configured": true, 00:05:51.816 "data_offset": 2048, 00:05:51.816 "data_size": 63488 00:05:51.816 } 00:05:51.816 ] 00:05:51.816 }' 00:05:51.816 14:30:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:05:51.816 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:52.390 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:05:52.390 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:05:52.390 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:05:52.390 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:05:52.390 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:05:52.390 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:05:52.390 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:05:52.390 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:05:52.390 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.390 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:52.390 [2024-10-01 14:30:43.780233] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:05:52.390 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:52.390 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:05:52.390 "name": "raid_bdev1", 00:05:52.390 "aliases": [ 00:05:52.390 "3c384abb-2876-4051-820c-45604498e9d5" 00:05:52.390 ], 00:05:52.390 "product_name": "Raid Volume", 00:05:52.390 "block_size": 512, 00:05:52.390 "num_blocks": 126976, 00:05:52.390 "uuid": "3c384abb-2876-4051-820c-45604498e9d5", 00:05:52.390 "assigned_rate_limits": { 00:05:52.390 "rw_ios_per_sec": 0, 00:05:52.390 "rw_mbytes_per_sec": 0, 00:05:52.390 
"r_mbytes_per_sec": 0, 00:05:52.390 "w_mbytes_per_sec": 0 00:05:52.390 }, 00:05:52.390 "claimed": false, 00:05:52.390 "zoned": false, 00:05:52.390 "supported_io_types": { 00:05:52.390 "read": true, 00:05:52.390 "write": true, 00:05:52.390 "unmap": true, 00:05:52.390 "flush": true, 00:05:52.390 "reset": true, 00:05:52.390 "nvme_admin": false, 00:05:52.390 "nvme_io": false, 00:05:52.390 "nvme_io_md": false, 00:05:52.390 "write_zeroes": true, 00:05:52.390 "zcopy": false, 00:05:52.390 "get_zone_info": false, 00:05:52.390 "zone_management": false, 00:05:52.390 "zone_append": false, 00:05:52.390 "compare": false, 00:05:52.390 "compare_and_write": false, 00:05:52.390 "abort": false, 00:05:52.390 "seek_hole": false, 00:05:52.390 "seek_data": false, 00:05:52.390 "copy": false, 00:05:52.390 "nvme_iov_md": false 00:05:52.390 }, 00:05:52.390 "memory_domains": [ 00:05:52.390 { 00:05:52.390 "dma_device_id": "system", 00:05:52.390 "dma_device_type": 1 00:05:52.390 }, 00:05:52.390 { 00:05:52.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:52.390 "dma_device_type": 2 00:05:52.390 }, 00:05:52.390 { 00:05:52.390 "dma_device_id": "system", 00:05:52.390 "dma_device_type": 1 00:05:52.390 }, 00:05:52.390 { 00:05:52.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:52.390 "dma_device_type": 2 00:05:52.390 } 00:05:52.390 ], 00:05:52.390 "driver_specific": { 00:05:52.390 "raid": { 00:05:52.390 "uuid": "3c384abb-2876-4051-820c-45604498e9d5", 00:05:52.390 "strip_size_kb": 64, 00:05:52.390 "state": "online", 00:05:52.390 "raid_level": "raid0", 00:05:52.390 "superblock": true, 00:05:52.390 "num_base_bdevs": 2, 00:05:52.390 "num_base_bdevs_discovered": 2, 00:05:52.390 "num_base_bdevs_operational": 2, 00:05:52.390 "base_bdevs_list": [ 00:05:52.390 { 00:05:52.390 "name": "pt1", 00:05:52.390 "uuid": "00000000-0000-0000-0000-000000000001", 00:05:52.390 "is_configured": true, 00:05:52.390 "data_offset": 2048, 00:05:52.390 "data_size": 63488 00:05:52.390 }, 00:05:52.390 { 00:05:52.390 "name": 
"pt2", 00:05:52.390 "uuid": "00000000-0000-0000-0000-000000000002", 00:05:52.390 "is_configured": true, 00:05:52.390 "data_offset": 2048, 00:05:52.390 "data_size": 63488 00:05:52.390 } 00:05:52.390 ] 00:05:52.390 } 00:05:52.390 } 00:05:52.390 }' 00:05:52.390 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:05:52.390 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:05:52.390 pt2' 00:05:52.390 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:05:52.390 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:05:52.390 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:05:52.390 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:05:52.390 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:05:52.390 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.390 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:52.390 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:52.390 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:05:52.390 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:05:52.390 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:05:52.390 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:05:52.390 14:30:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:05:52.390 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.390 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:52.390 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:52.390 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:05:52.390 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:05:52.390 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:05:52.390 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:05:52.390 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.390 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:52.391 [2024-10-01 14:30:43.944089] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:05:52.391 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:52.391 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3c384abb-2876-4051-820c-45604498e9d5 '!=' 3c384abb-2876-4051-820c-45604498e9d5 ']' 00:05:52.391 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:05:52.391 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:05:52.391 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:05:52.391 14:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 60045 00:05:52.391 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 60045 ']' 00:05:52.391 14:30:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@954 -- # kill -0 60045 00:05:52.391 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:05:52.391 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:52.391 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60045 00:05:52.391 killing process with pid 60045 00:05:52.391 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:52.391 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:52.391 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60045' 00:05:52.391 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 60045 00:05:52.391 [2024-10-01 14:30:43.996301] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:05:52.391 14:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 60045 00:05:52.391 [2024-10-01 14:30:43.996413] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:05:52.391 [2024-10-01 14:30:43.996474] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:05:52.391 [2024-10-01 14:30:43.996487] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:05:52.651 [2024-10-01 14:30:44.143008] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:05:53.590 14:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:05:53.590 00:05:53.590 real 0m3.663s 00:05:53.590 user 0m4.923s 00:05:53.590 sys 0m0.623s 00:05:53.590 14:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.590 ************************************ 00:05:53.590 END TEST 
raid_superblock_test 00:05:53.590 ************************************ 00:05:53.590 14:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:05:53.590 14:30:45 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:05:53.590 14:30:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:53.590 14:30:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.590 14:30:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:05:53.590 ************************************ 00:05:53.590 START TEST raid_read_error_test 00:05:53.590 ************************************ 00:05:53.590 14:30:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read 00:05:53.590 14:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:05:53.590 14:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:05:53.590 14:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:05:53.590 14:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:05:53.590 14:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:05:53.590 14:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:05:53.590 14:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:05:53.590 14:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:05:53.590 14:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:05:53.590 14:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:05:53.590 14:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:05:53.590 14:30:45 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:05:53.590 14:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:05:53.590 14:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:05:53.590 14:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:05:53.590 14:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:05:53.590 14:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:05:53.590 14:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:05:53.590 14:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:05:53.590 14:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:05:53.590 14:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:05:53.590 14:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:05:53.590 14:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.KplswVAlhp 00:05:53.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:53.590 14:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=60246 00:05:53.590 14:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 60246 00:05:53.590 14:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:05:53.590 14:30:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 60246 ']' 00:05:53.590 14:30:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.590 14:30:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:53.590 14:30:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.590 14:30:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:53.590 14:30:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:53.590 [2024-10-01 14:30:45.266351] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:05:53.590 [2024-10-01 14:30:45.266882] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60246 ] 00:05:53.850 [2024-10-01 14:30:45.433646] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.109 [2024-10-01 14:30:45.742067] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.370 [2024-10-01 14:30:45.910473] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:54.370 [2024-10-01 14:30:45.910523] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:54.632 14:30:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:54.632 14:30:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:05:54.632 14:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:05:54.632 14:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:05:54.632 14:30:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.632 14:30:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:54.632 BaseBdev1_malloc 00:05:54.632 14:30:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.632 14:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:05:54.632 14:30:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.632 14:30:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:54.632 true 00:05:54.632 14:30:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:05:54.632 14:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:05:54.632 14:30:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.632 14:30:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:54.632 [2024-10-01 14:30:46.164509] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:05:54.632 [2024-10-01 14:30:46.164595] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:54.632 [2024-10-01 14:30:46.164619] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:05:54.632 [2024-10-01 14:30:46.164632] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:54.632 [2024-10-01 14:30:46.167276] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:54.632 [2024-10-01 14:30:46.167338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:05:54.632 BaseBdev1 00:05:54.632 14:30:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.632 14:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:05:54.632 14:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:05:54.632 14:30:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.632 14:30:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:54.632 BaseBdev2_malloc 00:05:54.632 14:30:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.632 14:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:05:54.632 14:30:46 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.632 14:30:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:54.632 true 00:05:54.632 14:30:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.632 14:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:05:54.632 14:30:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.632 14:30:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:54.632 [2024-10-01 14:30:46.227375] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:05:54.632 [2024-10-01 14:30:46.227643] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:54.632 [2024-10-01 14:30:46.227676] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:54.632 [2024-10-01 14:30:46.227689] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:54.632 [2024-10-01 14:30:46.230306] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:54.633 [2024-10-01 14:30:46.230371] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:05:54.633 BaseBdev2 00:05:54.633 14:30:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.633 14:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:05:54.633 14:30:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.633 14:30:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:54.633 [2024-10-01 14:30:46.235468] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:05:54.633 [2024-10-01 14:30:46.237745] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:05:54.633 [2024-10-01 14:30:46.238000] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:05:54.633 [2024-10-01 14:30:46.238025] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:05:54.633 [2024-10-01 14:30:46.238345] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:05:54.633 [2024-10-01 14:30:46.238522] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:05:54.633 [2024-10-01 14:30:46.238532] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:05:54.633 [2024-10-01 14:30:46.238759] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:05:54.633 14:30:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.633 14:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:05:54.633 14:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:05:54.633 14:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:05:54.633 14:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:05:54.633 14:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:05:54.633 14:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:05:54.633 14:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:05:54.633 14:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:05:54.633 14:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:05:54.633 14:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:05:54.633 14:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:05:54.633 14:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:54.633 14:30:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.633 14:30:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:54.633 14:30:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.633 14:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:05:54.633 "name": "raid_bdev1", 00:05:54.633 "uuid": "5f50524d-09d2-4f6e-841f-52bd42a2709a", 00:05:54.633 "strip_size_kb": 64, 00:05:54.633 "state": "online", 00:05:54.633 "raid_level": "raid0", 00:05:54.633 "superblock": true, 00:05:54.633 "num_base_bdevs": 2, 00:05:54.633 "num_base_bdevs_discovered": 2, 00:05:54.633 "num_base_bdevs_operational": 2, 00:05:54.633 "base_bdevs_list": [ 00:05:54.633 { 00:05:54.633 "name": "BaseBdev1", 00:05:54.633 "uuid": "e60635d5-0d48-5ca8-b7c8-cf6b2ed3b59f", 00:05:54.633 "is_configured": true, 00:05:54.633 "data_offset": 2048, 00:05:54.633 "data_size": 63488 00:05:54.633 }, 00:05:54.633 { 00:05:54.633 "name": "BaseBdev2", 00:05:54.633 "uuid": "38279710-b64c-58ec-981b-d1199b0a677a", 00:05:54.633 "is_configured": true, 00:05:54.633 "data_offset": 2048, 00:05:54.633 "data_size": 63488 00:05:54.633 } 00:05:54.633 ] 00:05:54.633 }' 00:05:54.633 14:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:05:54.633 14:30:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:54.954 14:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:05:54.954 14:30:46 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:05:55.234 [2024-10-01 14:30:46.672659] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:05:56.175 14:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:05:56.175 14:30:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.175 14:30:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:56.175 14:30:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.175 14:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:05:56.175 14:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:05:56.175 14:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:05:56.175 14:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:05:56.175 14:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:05:56.175 14:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:05:56.175 14:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:05:56.175 14:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:05:56.175 14:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:05:56.175 14:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:05:56.175 14:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:05:56.175 14:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:05:56.175 14:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:05:56.175 14:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:05:56.175 14:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:56.175 14:30:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.175 14:30:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:56.175 14:30:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.175 14:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:05:56.175 "name": "raid_bdev1", 00:05:56.175 "uuid": "5f50524d-09d2-4f6e-841f-52bd42a2709a", 00:05:56.175 "strip_size_kb": 64, 00:05:56.175 "state": "online", 00:05:56.175 "raid_level": "raid0", 00:05:56.175 "superblock": true, 00:05:56.175 "num_base_bdevs": 2, 00:05:56.175 "num_base_bdevs_discovered": 2, 00:05:56.175 "num_base_bdevs_operational": 2, 00:05:56.175 "base_bdevs_list": [ 00:05:56.175 { 00:05:56.175 "name": "BaseBdev1", 00:05:56.175 "uuid": "e60635d5-0d48-5ca8-b7c8-cf6b2ed3b59f", 00:05:56.175 "is_configured": true, 00:05:56.175 "data_offset": 2048, 00:05:56.175 "data_size": 63488 00:05:56.175 }, 00:05:56.175 { 00:05:56.175 "name": "BaseBdev2", 00:05:56.175 "uuid": "38279710-b64c-58ec-981b-d1199b0a677a", 00:05:56.175 "is_configured": true, 00:05:56.175 "data_offset": 2048, 00:05:56.175 "data_size": 63488 00:05:56.175 } 00:05:56.175 ] 00:05:56.175 }' 00:05:56.175 14:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:05:56.175 14:30:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:56.435 14:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:05:56.435 14:30:47 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.435 14:30:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:56.435 [2024-10-01 14:30:47.887365] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:05:56.435 [2024-10-01 14:30:47.887417] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:05:56.435 [2024-10-01 14:30:47.890801] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:05:56.435 [2024-10-01 14:30:47.890990] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:05:56.435 [2024-10-01 14:30:47.891061] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:05:56.435 [2024-10-01 14:30:47.891149] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:05:56.435 { 00:05:56.435 "results": [ 00:05:56.435 { 00:05:56.435 "job": "raid_bdev1", 00:05:56.435 "core_mask": "0x1", 00:05:56.435 "workload": "randrw", 00:05:56.435 "percentage": 50, 00:05:56.435 "status": "finished", 00:05:56.435 "queue_depth": 1, 00:05:56.435 "io_size": 131072, 00:05:56.435 "runtime": 1.212457, 00:05:56.435 "iops": 12400.439768173223, 00:05:56.435 "mibps": 1550.0549710216528, 00:05:56.435 "io_failed": 1, 00:05:56.435 "io_timeout": 0, 00:05:56.435 "avg_latency_us": 112.26608058608058, 00:05:56.435 "min_latency_us": 33.28, 00:05:56.435 "max_latency_us": 1940.8738461538462 00:05:56.435 } 00:05:56.435 ], 00:05:56.435 "core_count": 1 00:05:56.435 } 00:05:56.435 14:30:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.435 14:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 60246 00:05:56.435 14:30:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 60246 ']' 00:05:56.435 14:30:47 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 60246 00:05:56.435 14:30:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:05:56.435 14:30:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:56.435 14:30:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60246 00:05:56.435 killing process with pid 60246 00:05:56.435 14:30:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:56.435 14:30:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:56.435 14:30:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60246' 00:05:56.435 14:30:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 60246 00:05:56.435 14:30:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 60246 00:05:56.435 [2024-10-01 14:30:47.923301] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:05:56.435 [2024-10-01 14:30:48.025382] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:05:57.378 14:30:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:05:57.378 14:30:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.KplswVAlhp 00:05:57.378 14:30:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:05:57.378 14:30:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.82 00:05:57.378 14:30:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:05:57.378 14:30:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:05:57.378 14:30:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:05:57.378 14:30:49 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@849 -- # [[ 0.82 != \0\.\0\0 ]] 00:05:57.378 00:05:57.378 real 0m3.867s 00:05:57.378 user 0m4.424s 00:05:57.378 sys 0m0.554s 00:05:57.378 ************************************ 00:05:57.378 END TEST raid_read_error_test 00:05:57.378 ************************************ 00:05:57.378 14:30:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.378 14:30:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:57.640 14:30:49 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:05:57.640 14:30:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:57.640 14:30:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.640 14:30:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:05:57.640 ************************************ 00:05:57.640 START TEST raid_write_error_test 00:05:57.640 ************************************ 00:05:57.640 14:30:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write 00:05:57.640 14:30:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:05:57.640 14:30:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:05:57.640 14:30:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:05:57.640 14:30:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:05:57.640 14:30:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:05:57.640 14:30:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:05:57.640 14:30:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:05:57.640 14:30:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:05:57.640 14:30:49 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:05:57.640 14:30:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:05:57.640 14:30:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:05:57.640 14:30:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:05:57.640 14:30:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:05:57.640 14:30:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:05:57.640 14:30:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:05:57.640 14:30:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:05:57.640 14:30:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:05:57.640 14:30:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:05:57.640 14:30:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:05:57.640 14:30:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:05:57.640 14:30:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:05:57.640 14:30:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:05:57.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:57.640 14:30:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.wjUHddCmQM 00:05:57.640 14:30:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=60386 00:05:57.640 14:30:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 60386 00:05:57.640 14:30:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 60386 ']' 00:05:57.640 14:30:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.640 14:30:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:57.640 14:30:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.640 14:30:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:05:57.640 14:30:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:57.640 14:30:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:57.640 [2024-10-01 14:30:49.167643] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:05:57.640 [2024-10-01 14:30:49.167836] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60386 ] 00:05:57.901 [2024-10-01 14:30:49.323964] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.161 [2024-10-01 14:30:49.593156] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.161 [2024-10-01 14:30:49.762388] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:58.161 [2024-10-01 14:30:49.762447] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:05:58.420 14:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:58.420 14:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:05:58.420 14:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:05:58.420 14:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:05:58.420 14:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.420 14:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:58.420 BaseBdev1_malloc 00:05:58.420 14:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.420 14:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:05:58.421 14:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.421 14:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:58.421 true 00:05:58.421 14:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:05:58.421 14:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:05:58.421 14:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.421 14:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:58.421 [2024-10-01 14:30:50.102665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:05:58.421 [2024-10-01 14:30:50.102784] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:58.421 [2024-10-01 14:30:50.102811] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:05:58.421 [2024-10-01 14:30:50.102826] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:58.681 [2024-10-01 14:30:50.105510] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:58.681 [2024-10-01 14:30:50.105592] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:05:58.681 BaseBdev1 00:05:58.681 14:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.681 14:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:05:58.681 14:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:05:58.681 14:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.681 14:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:58.681 BaseBdev2_malloc 00:05:58.681 14:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.681 14:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:05:58.681 14:30:50 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.681 14:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:58.681 true 00:05:58.681 14:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.681 14:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:05:58.681 14:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.681 14:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:58.681 [2024-10-01 14:30:50.170244] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:05:58.681 [2024-10-01 14:30:50.170531] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:58.681 [2024-10-01 14:30:50.170573] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:58.681 [2024-10-01 14:30:50.170587] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:58.681 [2024-10-01 14:30:50.173373] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:58.681 [2024-10-01 14:30:50.173439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:05:58.681 BaseBdev2 00:05:58.681 14:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.681 14:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:05:58.681 14:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.681 14:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:58.681 [2024-10-01 14:30:50.178410] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:05:58.681 [2024-10-01 14:30:50.180787] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:05:58.681 [2024-10-01 14:30:50.181255] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:05:58.681 [2024-10-01 14:30:50.181286] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:05:58.681 [2024-10-01 14:30:50.181646] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:05:58.681 [2024-10-01 14:30:50.181917] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:05:58.681 [2024-10-01 14:30:50.181937] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:05:58.681 [2024-10-01 14:30:50.182300] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:05:58.681 14:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.681 14:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:05:58.681 14:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:05:58.681 14:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:05:58.681 14:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:05:58.681 14:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:05:58.681 14:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:05:58.681 14:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:05:58.681 14:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:05:58.681 14:30:50 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:05:58.681 14:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:05:58.681 14:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:58.681 14:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.681 14:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:05:58.681 14:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:58.681 14:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.681 14:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:05:58.681 "name": "raid_bdev1", 00:05:58.681 "uuid": "30d0769a-82da-4a25-a1ee-96867a3a9779", 00:05:58.681 "strip_size_kb": 64, 00:05:58.681 "state": "online", 00:05:58.681 "raid_level": "raid0", 00:05:58.681 "superblock": true, 00:05:58.681 "num_base_bdevs": 2, 00:05:58.681 "num_base_bdevs_discovered": 2, 00:05:58.681 "num_base_bdevs_operational": 2, 00:05:58.681 "base_bdevs_list": [ 00:05:58.682 { 00:05:58.682 "name": "BaseBdev1", 00:05:58.682 "uuid": "0fc777bc-4d3e-5bed-81f2-6fd061eec615", 00:05:58.682 "is_configured": true, 00:05:58.682 "data_offset": 2048, 00:05:58.682 "data_size": 63488 00:05:58.682 }, 00:05:58.682 { 00:05:58.682 "name": "BaseBdev2", 00:05:58.682 "uuid": "f9c0658d-56e0-5297-822b-c8f920a69edc", 00:05:58.682 "is_configured": true, 00:05:58.682 "data_offset": 2048, 00:05:58.682 "data_size": 63488 00:05:58.682 } 00:05:58.682 ] 00:05:58.682 }' 00:05:58.682 14:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:05:58.682 14:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:58.942 14:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:05:58.942 14:30:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:05:58.942 [2024-10-01 14:30:50.619647] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:05:59.885 14:30:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:05:59.885 14:30:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.885 14:30:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:59.885 14:30:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.885 14:30:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:05:59.885 14:30:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:05:59.885 14:30:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:05:59.885 14:30:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:05:59.885 14:30:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:05:59.885 14:30:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:05:59.885 14:30:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:05:59.885 14:30:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:05:59.885 14:30:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:05:59.885 14:30:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:05:59.885 14:30:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:05:59.885 14:30:51 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:05:59.885 14:30:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:05:59.885 14:30:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:05:59.885 14:30:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:05:59.885 14:30:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.885 14:30:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:05:59.885 14:30:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.144 14:30:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:00.144 "name": "raid_bdev1", 00:06:00.144 "uuid": "30d0769a-82da-4a25-a1ee-96867a3a9779", 00:06:00.144 "strip_size_kb": 64, 00:06:00.144 "state": "online", 00:06:00.144 "raid_level": "raid0", 00:06:00.144 "superblock": true, 00:06:00.144 "num_base_bdevs": 2, 00:06:00.144 "num_base_bdevs_discovered": 2, 00:06:00.144 "num_base_bdevs_operational": 2, 00:06:00.144 "base_bdevs_list": [ 00:06:00.144 { 00:06:00.144 "name": "BaseBdev1", 00:06:00.144 "uuid": "0fc777bc-4d3e-5bed-81f2-6fd061eec615", 00:06:00.144 "is_configured": true, 00:06:00.144 "data_offset": 2048, 00:06:00.144 "data_size": 63488 00:06:00.144 }, 00:06:00.144 { 00:06:00.144 "name": "BaseBdev2", 00:06:00.144 "uuid": "f9c0658d-56e0-5297-822b-c8f920a69edc", 00:06:00.144 "is_configured": true, 00:06:00.144 "data_offset": 2048, 00:06:00.144 "data_size": 63488 00:06:00.144 } 00:06:00.144 ] 00:06:00.144 }' 00:06:00.144 14:30:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:00.144 14:30:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:00.404 14:30:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:06:00.404 14:30:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.404 14:30:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:00.404 [2024-10-01 14:30:51.887853] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:00.404 [2024-10-01 14:30:51.888113] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:00.404 [2024-10-01 14:30:51.891395] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:00.404 [2024-10-01 14:30:51.891610] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:00.404 [2024-10-01 14:30:51.891682] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:00.404 [2024-10-01 14:30:51.891789] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:06:00.404 { 00:06:00.404 "results": [ 00:06:00.404 { 00:06:00.404 "job": "raid_bdev1", 00:06:00.404 "core_mask": "0x1", 00:06:00.404 "workload": "randrw", 00:06:00.404 "percentage": 50, 00:06:00.404 "status": "finished", 00:06:00.404 "queue_depth": 1, 00:06:00.404 "io_size": 131072, 00:06:00.404 "runtime": 1.266126, 00:06:00.404 "iops": 12174.933616401528, 00:06:00.404 "mibps": 1521.866702050191, 00:06:00.404 "io_failed": 1, 00:06:00.404 "io_timeout": 0, 00:06:00.404 "avg_latency_us": 114.52102670552075, 00:06:00.404 "min_latency_us": 33.28, 00:06:00.404 "max_latency_us": 1739.2246153846154 00:06:00.404 } 00:06:00.404 ], 00:06:00.404 "core_count": 1 00:06:00.404 } 00:06:00.404 14:30:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.404 14:30:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 60386 00:06:00.404 14:30:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 
-- # '[' -z 60386 ']' 00:06:00.404 14:30:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 60386 00:06:00.404 14:30:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:06:00.404 14:30:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:00.404 14:30:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60386 00:06:00.404 killing process with pid 60386 00:06:00.404 14:30:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:00.404 14:30:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:00.404 14:30:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60386' 00:06:00.404 14:30:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 60386 00:06:00.404 14:30:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 60386 00:06:00.404 [2024-10-01 14:30:51.922526] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:00.404 [2024-10-01 14:30:52.022565] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:01.841 14:30:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.wjUHddCmQM 00:06:01.841 14:30:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:06:01.841 14:30:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:06:01.841 ************************************ 00:06:01.841 END TEST raid_write_error_test 00:06:01.841 ************************************ 00:06:01.841 14:30:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.79 00:06:01.841 14:30:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:06:01.841 14:30:53 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:01.841 14:30:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:01.841 14:30:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.79 != \0\.\0\0 ]] 00:06:01.841 00:06:01.841 real 0m3.959s 00:06:01.841 user 0m4.588s 00:06:01.841 sys 0m0.532s 00:06:01.841 14:30:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.841 14:30:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:01.841 14:30:53 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:06:01.841 14:30:53 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:06:01.841 14:30:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:01.841 14:30:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.841 14:30:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:01.841 ************************************ 00:06:01.841 START TEST raid_state_function_test 00:06:01.841 ************************************ 00:06:01.841 14:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false 00:06:01.841 14:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:06:01.841 14:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:01.841 14:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:06:01.841 14:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:01.841 14:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:01.841 14:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:01.841 
14:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:01.841 14:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:01.841 14:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:01.841 14:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:01.841 14:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:01.841 14:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:01.841 14:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:01.841 14:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:01.841 14:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:01.841 14:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:01.841 14:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:01.841 14:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:01.841 Process raid pid: 60524 00:06:01.841 14:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:06:01.841 14:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:01.841 14:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:01.841 14:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:06:01.841 14:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:06:01.841 14:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60524 00:06:01.841 
14:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:01.841 14:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60524' 00:06:01.841 14:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60524 00:06:01.841 14:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 60524 ']' 00:06:01.841 14:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.841 14:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:01.841 14:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.841 14:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:01.841 14:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:01.841 [2024-10-01 14:30:53.214023] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:06:01.841 [2024-10-01 14:30:53.215237] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:01.841 [2024-10-01 14:30:53.384961] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.105 [2024-10-01 14:30:53.657005] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.365 [2024-10-01 14:30:53.834936] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:02.365 [2024-10-01 14:30:53.834997] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:02.626 14:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:02.626 14:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:06:02.626 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:02.626 14:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.626 14:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:02.626 [2024-10-01 14:30:54.169014] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:02.626 [2024-10-01 14:30:54.169101] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:02.626 [2024-10-01 14:30:54.169113] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:02.626 [2024-10-01 14:30:54.169124] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:02.626 14:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.626 14:30:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:02.626 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:02.626 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:02.626 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:02.626 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:02.626 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:02.626 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:02.626 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:02.626 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:02.626 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:02.626 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:02.626 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:02.626 14:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.626 14:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:02.626 14:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.626 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:02.626 "name": "Existed_Raid", 00:06:02.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:02.626 "strip_size_kb": 64, 00:06:02.626 "state": "configuring", 00:06:02.626 
"raid_level": "concat", 00:06:02.626 "superblock": false, 00:06:02.626 "num_base_bdevs": 2, 00:06:02.626 "num_base_bdevs_discovered": 0, 00:06:02.626 "num_base_bdevs_operational": 2, 00:06:02.626 "base_bdevs_list": [ 00:06:02.626 { 00:06:02.626 "name": "BaseBdev1", 00:06:02.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:02.626 "is_configured": false, 00:06:02.626 "data_offset": 0, 00:06:02.626 "data_size": 0 00:06:02.626 }, 00:06:02.626 { 00:06:02.626 "name": "BaseBdev2", 00:06:02.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:02.626 "is_configured": false, 00:06:02.626 "data_offset": 0, 00:06:02.626 "data_size": 0 00:06:02.626 } 00:06:02.626 ] 00:06:02.626 }' 00:06:02.626 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:02.626 14:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:02.888 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:02.888 14:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.888 14:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:02.888 [2024-10-01 14:30:54.512982] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:02.888 [2024-10-01 14:30:54.513043] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:06:02.888 14:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.888 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:02.888 14:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.888 14:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:06:02.888 [2024-10-01 14:30:54.521008] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:02.888 [2024-10-01 14:30:54.521081] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:02.888 [2024-10-01 14:30:54.521091] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:02.888 [2024-10-01 14:30:54.521105] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:02.888 14:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.888 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:02.888 14:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.888 14:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:03.149 [2024-10-01 14:30:54.574636] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:03.149 BaseBdev1 00:06:03.149 14:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.149 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:03.149 14:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:06:03.149 14:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:03.149 14:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:06:03.149 14:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:03.149 14:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:03.149 14:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:06:03.149 14:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.149 14:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:03.149 14:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.149 14:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:03.149 14:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.149 14:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:03.149 [ 00:06:03.149 { 00:06:03.149 "name": "BaseBdev1", 00:06:03.149 "aliases": [ 00:06:03.149 "c2282105-d10b-4f55-9bdb-9642109b5bf8" 00:06:03.149 ], 00:06:03.149 "product_name": "Malloc disk", 00:06:03.149 "block_size": 512, 00:06:03.149 "num_blocks": 65536, 00:06:03.149 "uuid": "c2282105-d10b-4f55-9bdb-9642109b5bf8", 00:06:03.149 "assigned_rate_limits": { 00:06:03.149 "rw_ios_per_sec": 0, 00:06:03.149 "rw_mbytes_per_sec": 0, 00:06:03.149 "r_mbytes_per_sec": 0, 00:06:03.149 "w_mbytes_per_sec": 0 00:06:03.149 }, 00:06:03.149 "claimed": true, 00:06:03.149 "claim_type": "exclusive_write", 00:06:03.149 "zoned": false, 00:06:03.149 "supported_io_types": { 00:06:03.149 "read": true, 00:06:03.149 "write": true, 00:06:03.149 "unmap": true, 00:06:03.149 "flush": true, 00:06:03.149 "reset": true, 00:06:03.149 "nvme_admin": false, 00:06:03.149 "nvme_io": false, 00:06:03.149 "nvme_io_md": false, 00:06:03.149 "write_zeroes": true, 00:06:03.149 "zcopy": true, 00:06:03.149 "get_zone_info": false, 00:06:03.149 "zone_management": false, 00:06:03.149 "zone_append": false, 00:06:03.149 "compare": false, 00:06:03.149 "compare_and_write": false, 00:06:03.149 "abort": true, 00:06:03.149 "seek_hole": false, 00:06:03.149 "seek_data": false, 00:06:03.149 "copy": true, 00:06:03.149 "nvme_iov_md": 
false 00:06:03.149 }, 00:06:03.149 "memory_domains": [ 00:06:03.149 { 00:06:03.149 "dma_device_id": "system", 00:06:03.149 "dma_device_type": 1 00:06:03.149 }, 00:06:03.149 { 00:06:03.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:03.149 "dma_device_type": 2 00:06:03.149 } 00:06:03.149 ], 00:06:03.149 "driver_specific": {} 00:06:03.149 } 00:06:03.149 ] 00:06:03.149 14:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.149 14:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:06:03.149 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:03.149 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:03.149 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:03.149 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:03.149 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:03.149 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:03.150 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:03.150 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:03.150 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:03.150 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:03.150 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:03.150 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:03.150 
14:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.150 14:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:03.150 14:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.150 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:03.150 "name": "Existed_Raid", 00:06:03.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:03.150 "strip_size_kb": 64, 00:06:03.150 "state": "configuring", 00:06:03.150 "raid_level": "concat", 00:06:03.150 "superblock": false, 00:06:03.150 "num_base_bdevs": 2, 00:06:03.150 "num_base_bdevs_discovered": 1, 00:06:03.150 "num_base_bdevs_operational": 2, 00:06:03.150 "base_bdevs_list": [ 00:06:03.150 { 00:06:03.150 "name": "BaseBdev1", 00:06:03.150 "uuid": "c2282105-d10b-4f55-9bdb-9642109b5bf8", 00:06:03.150 "is_configured": true, 00:06:03.150 "data_offset": 0, 00:06:03.150 "data_size": 65536 00:06:03.150 }, 00:06:03.150 { 00:06:03.150 "name": "BaseBdev2", 00:06:03.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:03.150 "is_configured": false, 00:06:03.150 "data_offset": 0, 00:06:03.150 "data_size": 0 00:06:03.150 } 00:06:03.150 ] 00:06:03.150 }' 00:06:03.150 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:03.150 14:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:03.409 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:03.409 14:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.409 14:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:03.409 [2024-10-01 14:30:54.942799] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:03.409 [2024-10-01 14:30:54.942873] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:06:03.409 14:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.409 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:03.409 14:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.409 14:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:03.409 [2024-10-01 14:30:54.950844] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:03.409 [2024-10-01 14:30:54.953119] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:03.409 [2024-10-01 14:30:54.953189] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:03.410 14:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.410 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:03.410 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:03.410 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:03.410 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:03.410 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:03.410 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:03.410 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:03.410 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:06:03.410 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:03.410 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:03.410 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:03.410 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:03.410 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:03.410 14:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:03.410 14:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.410 14:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:03.410 14:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.410 14:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:03.410 "name": "Existed_Raid", 00:06:03.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:03.410 "strip_size_kb": 64, 00:06:03.410 "state": "configuring", 00:06:03.410 "raid_level": "concat", 00:06:03.410 "superblock": false, 00:06:03.410 "num_base_bdevs": 2, 00:06:03.410 "num_base_bdevs_discovered": 1, 00:06:03.410 "num_base_bdevs_operational": 2, 00:06:03.410 "base_bdevs_list": [ 00:06:03.410 { 00:06:03.410 "name": "BaseBdev1", 00:06:03.410 "uuid": "c2282105-d10b-4f55-9bdb-9642109b5bf8", 00:06:03.410 "is_configured": true, 00:06:03.410 "data_offset": 0, 00:06:03.410 "data_size": 65536 00:06:03.410 }, 00:06:03.410 { 00:06:03.410 "name": "BaseBdev2", 00:06:03.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:03.410 "is_configured": false, 00:06:03.410 "data_offset": 0, 00:06:03.410 "data_size": 0 00:06:03.410 } 
00:06:03.410 ] 00:06:03.410 }' 00:06:03.410 14:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:03.410 14:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:03.672 14:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:03.672 14:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.672 14:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:03.933 [2024-10-01 14:30:55.364931] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:03.933 [2024-10-01 14:30:55.365010] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:03.933 [2024-10-01 14:30:55.365020] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:03.933 [2024-10-01 14:30:55.365337] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:03.933 [2024-10-01 14:30:55.365508] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:03.933 [2024-10-01 14:30:55.365521] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:06:03.933 [2024-10-01 14:30:55.365899] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:03.933 BaseBdev2 00:06:03.933 14:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.933 14:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:03.933 14:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:06:03.933 14:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:03.933 14:30:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:06:03.933 14:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:03.933 14:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:03.933 14:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:06:03.933 14:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.933 14:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:03.933 14:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.933 14:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:03.933 14:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.933 14:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:03.933 [ 00:06:03.933 { 00:06:03.933 "name": "BaseBdev2", 00:06:03.933 "aliases": [ 00:06:03.933 "b0680c93-1bb1-4a68-95fc-1ff9f9d2dac0" 00:06:03.933 ], 00:06:03.933 "product_name": "Malloc disk", 00:06:03.933 "block_size": 512, 00:06:03.933 "num_blocks": 65536, 00:06:03.933 "uuid": "b0680c93-1bb1-4a68-95fc-1ff9f9d2dac0", 00:06:03.933 "assigned_rate_limits": { 00:06:03.933 "rw_ios_per_sec": 0, 00:06:03.933 "rw_mbytes_per_sec": 0, 00:06:03.933 "r_mbytes_per_sec": 0, 00:06:03.933 "w_mbytes_per_sec": 0 00:06:03.933 }, 00:06:03.933 "claimed": true, 00:06:03.933 "claim_type": "exclusive_write", 00:06:03.933 "zoned": false, 00:06:03.933 "supported_io_types": { 00:06:03.933 "read": true, 00:06:03.934 "write": true, 00:06:03.934 "unmap": true, 00:06:03.934 "flush": true, 00:06:03.934 "reset": true, 00:06:03.934 "nvme_admin": false, 00:06:03.934 "nvme_io": false, 00:06:03.934 "nvme_io_md": 
false, 00:06:03.934 "write_zeroes": true, 00:06:03.934 "zcopy": true, 00:06:03.934 "get_zone_info": false, 00:06:03.934 "zone_management": false, 00:06:03.934 "zone_append": false, 00:06:03.934 "compare": false, 00:06:03.934 "compare_and_write": false, 00:06:03.934 "abort": true, 00:06:03.934 "seek_hole": false, 00:06:03.934 "seek_data": false, 00:06:03.934 "copy": true, 00:06:03.934 "nvme_iov_md": false 00:06:03.934 }, 00:06:03.934 "memory_domains": [ 00:06:03.934 { 00:06:03.934 "dma_device_id": "system", 00:06:03.934 "dma_device_type": 1 00:06:03.934 }, 00:06:03.934 { 00:06:03.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:03.934 "dma_device_type": 2 00:06:03.934 } 00:06:03.934 ], 00:06:03.934 "driver_specific": {} 00:06:03.934 } 00:06:03.934 ] 00:06:03.934 14:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.934 14:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:06:03.934 14:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:03.934 14:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:03.934 14:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:06:03.934 14:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:03.934 14:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:03.934 14:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:03.934 14:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:03.934 14:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:03.934 14:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:06:03.934 14:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:03.934 14:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:03.934 14:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:03.934 14:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:03.934 14:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.934 14:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:03.934 14:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:03.934 14:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.934 14:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:03.934 "name": "Existed_Raid", 00:06:03.934 "uuid": "c6b83f11-8b5e-449f-ac64-8c83963f8a35", 00:06:03.934 "strip_size_kb": 64, 00:06:03.934 "state": "online", 00:06:03.934 "raid_level": "concat", 00:06:03.934 "superblock": false, 00:06:03.934 "num_base_bdevs": 2, 00:06:03.934 "num_base_bdevs_discovered": 2, 00:06:03.934 "num_base_bdevs_operational": 2, 00:06:03.934 "base_bdevs_list": [ 00:06:03.934 { 00:06:03.934 "name": "BaseBdev1", 00:06:03.934 "uuid": "c2282105-d10b-4f55-9bdb-9642109b5bf8", 00:06:03.934 "is_configured": true, 00:06:03.934 "data_offset": 0, 00:06:03.934 "data_size": 65536 00:06:03.934 }, 00:06:03.934 { 00:06:03.934 "name": "BaseBdev2", 00:06:03.934 "uuid": "b0680c93-1bb1-4a68-95fc-1ff9f9d2dac0", 00:06:03.934 "is_configured": true, 00:06:03.934 "data_offset": 0, 00:06:03.934 "data_size": 65536 00:06:03.934 } 00:06:03.934 ] 00:06:03.934 }' 00:06:03.934 14:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:06:03.934 14:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:04.196 14:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:04.196 14:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:04.196 14:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:04.196 14:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:04.196 14:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:04.196 14:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:04.196 14:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:04.196 14:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:04.196 14:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.196 14:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:04.196 [2024-10-01 14:30:55.769442] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:04.196 14:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.196 14:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:04.196 "name": "Existed_Raid", 00:06:04.196 "aliases": [ 00:06:04.196 "c6b83f11-8b5e-449f-ac64-8c83963f8a35" 00:06:04.196 ], 00:06:04.196 "product_name": "Raid Volume", 00:06:04.196 "block_size": 512, 00:06:04.196 "num_blocks": 131072, 00:06:04.196 "uuid": "c6b83f11-8b5e-449f-ac64-8c83963f8a35", 00:06:04.196 "assigned_rate_limits": { 00:06:04.196 "rw_ios_per_sec": 0, 00:06:04.196 "rw_mbytes_per_sec": 0, 00:06:04.196 "r_mbytes_per_sec": 
0, 00:06:04.196 "w_mbytes_per_sec": 0 00:06:04.196 }, 00:06:04.196 "claimed": false, 00:06:04.196 "zoned": false, 00:06:04.196 "supported_io_types": { 00:06:04.196 "read": true, 00:06:04.196 "write": true, 00:06:04.196 "unmap": true, 00:06:04.196 "flush": true, 00:06:04.196 "reset": true, 00:06:04.196 "nvme_admin": false, 00:06:04.196 "nvme_io": false, 00:06:04.196 "nvme_io_md": false, 00:06:04.196 "write_zeroes": true, 00:06:04.196 "zcopy": false, 00:06:04.196 "get_zone_info": false, 00:06:04.196 "zone_management": false, 00:06:04.196 "zone_append": false, 00:06:04.196 "compare": false, 00:06:04.196 "compare_and_write": false, 00:06:04.196 "abort": false, 00:06:04.196 "seek_hole": false, 00:06:04.196 "seek_data": false, 00:06:04.196 "copy": false, 00:06:04.196 "nvme_iov_md": false 00:06:04.196 }, 00:06:04.196 "memory_domains": [ 00:06:04.196 { 00:06:04.196 "dma_device_id": "system", 00:06:04.196 "dma_device_type": 1 00:06:04.196 }, 00:06:04.196 { 00:06:04.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:04.196 "dma_device_type": 2 00:06:04.196 }, 00:06:04.196 { 00:06:04.196 "dma_device_id": "system", 00:06:04.196 "dma_device_type": 1 00:06:04.196 }, 00:06:04.196 { 00:06:04.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:04.196 "dma_device_type": 2 00:06:04.196 } 00:06:04.196 ], 00:06:04.196 "driver_specific": { 00:06:04.196 "raid": { 00:06:04.196 "uuid": "c6b83f11-8b5e-449f-ac64-8c83963f8a35", 00:06:04.196 "strip_size_kb": 64, 00:06:04.196 "state": "online", 00:06:04.196 "raid_level": "concat", 00:06:04.196 "superblock": false, 00:06:04.196 "num_base_bdevs": 2, 00:06:04.196 "num_base_bdevs_discovered": 2, 00:06:04.196 "num_base_bdevs_operational": 2, 00:06:04.197 "base_bdevs_list": [ 00:06:04.197 { 00:06:04.197 "name": "BaseBdev1", 00:06:04.197 "uuid": "c2282105-d10b-4f55-9bdb-9642109b5bf8", 00:06:04.197 "is_configured": true, 00:06:04.197 "data_offset": 0, 00:06:04.197 "data_size": 65536 00:06:04.197 }, 00:06:04.197 { 00:06:04.197 "name": "BaseBdev2", 
00:06:04.197 "uuid": "b0680c93-1bb1-4a68-95fc-1ff9f9d2dac0", 00:06:04.197 "is_configured": true, 00:06:04.197 "data_offset": 0, 00:06:04.197 "data_size": 65536 00:06:04.197 } 00:06:04.197 ] 00:06:04.197 } 00:06:04.197 } 00:06:04.197 }' 00:06:04.197 14:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:04.197 14:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:04.197 BaseBdev2' 00:06:04.197 14:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:04.457 14:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:04.457 14:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:04.457 14:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:04.457 14:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:04.457 14:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.457 14:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:04.457 14:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.457 14:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:04.457 14:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:04.457 14:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:04.457 14:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:06:04.457 14:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:04.457 14:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.457 14:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:04.457 14:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.457 14:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:04.457 14:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:04.457 14:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:04.457 14:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.457 14:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:04.457 [2024-10-01 14:30:55.965245] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:04.457 [2024-10-01 14:30:55.965305] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:04.457 [2024-10-01 14:30:55.965369] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:04.457 14:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.457 14:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:04.457 14:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:06:04.457 14:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:04.457 14:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:04.457 14:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:06:04.457 14:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:06:04.457 14:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:04.457 14:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:04.457 14:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:04.457 14:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:04.457 14:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:04.457 14:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:04.457 14:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:04.457 14:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:04.457 14:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:04.457 14:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:04.457 14:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:04.457 14:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.457 14:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:04.457 14:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.457 14:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:04.457 "name": "Existed_Raid", 00:06:04.457 "uuid": "c6b83f11-8b5e-449f-ac64-8c83963f8a35", 00:06:04.457 "strip_size_kb": 64, 00:06:04.457 
"state": "offline", 00:06:04.457 "raid_level": "concat", 00:06:04.457 "superblock": false, 00:06:04.457 "num_base_bdevs": 2, 00:06:04.457 "num_base_bdevs_discovered": 1, 00:06:04.457 "num_base_bdevs_operational": 1, 00:06:04.457 "base_bdevs_list": [ 00:06:04.457 { 00:06:04.457 "name": null, 00:06:04.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:04.457 "is_configured": false, 00:06:04.457 "data_offset": 0, 00:06:04.457 "data_size": 65536 00:06:04.457 }, 00:06:04.457 { 00:06:04.457 "name": "BaseBdev2", 00:06:04.457 "uuid": "b0680c93-1bb1-4a68-95fc-1ff9f9d2dac0", 00:06:04.457 "is_configured": true, 00:06:04.457 "data_offset": 0, 00:06:04.457 "data_size": 65536 00:06:04.457 } 00:06:04.457 ] 00:06:04.457 }' 00:06:04.457 14:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:04.457 14:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:04.716 14:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:04.716 14:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:04.716 14:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:04.716 14:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.716 14:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:04.716 14:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:04.977 14:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.977 14:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:04.977 14:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:04.977 14:30:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:04.977 14:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.977 14:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:04.977 [2024-10-01 14:30:56.439198] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:04.977 [2024-10-01 14:30:56.439279] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:06:04.977 14:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.977 14:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:04.977 14:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:04.977 14:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:04.977 14:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:04.977 14:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.977 14:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:04.977 14:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.977 14:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:04.977 14:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:04.977 14:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:04.977 14:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60524 00:06:04.977 14:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 60524 ']' 00:06:04.977 14:30:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 60524 00:06:04.977 14:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:06:04.977 14:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:04.977 14:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60524 00:06:04.977 killing process with pid 60524 00:06:04.977 14:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:04.977 14:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:04.977 14:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60524' 00:06:04.977 14:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 60524 00:06:04.977 14:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 60524 00:06:04.977 [2024-10-01 14:30:56.572302] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:04.977 [2024-10-01 14:30:56.584589] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:06.358 14:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:06:06.358 00:06:06.358 real 0m4.512s 00:06:06.358 user 0m6.266s 00:06:06.358 sys 0m0.809s 00:06:06.358 14:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.359 ************************************ 00:06:06.359 END TEST raid_state_function_test 00:06:06.359 ************************************ 00:06:06.359 14:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:06.359 14:30:57 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:06:06.359 14:30:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 
']' 00:06:06.359 14:30:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.359 14:30:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:06.359 ************************************ 00:06:06.359 START TEST raid_state_function_test_sb 00:06:06.359 ************************************ 00:06:06.359 14:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true 00:06:06.359 14:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:06:06.359 14:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:06.359 14:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:06:06.359 14:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:06.359 14:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:06.359 14:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:06.359 14:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:06.359 14:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:06.359 14:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:06.359 14:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:06.359 14:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:06.359 14:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:06.359 14:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:06.359 14:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:06:06.359 14:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:06.359 14:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:06.359 14:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:06.359 14:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:06.359 14:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:06:06.359 14:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:06.359 14:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:06.359 14:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:06:06.359 14:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:06:06.359 Process raid pid: 60767 00:06:06.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:06.359 14:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60767 00:06:06.359 14:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60767' 00:06:06.359 14:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60767 00:06:06.359 14:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:06.359 14:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 60767 ']' 00:06:06.359 14:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.359 14:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:06.359 14:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.359 14:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:06.359 14:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:06.359 [2024-10-01 14:30:57.778189] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:06:06.359 [2024-10-01 14:30:57.778612] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:06.359 [2024-10-01 14:30:57.933852] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.619 [2024-10-01 14:30:58.205636] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.879 [2024-10-01 14:30:58.384907] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:06.879 [2024-10-01 14:30:58.384968] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:07.141 14:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:07.141 14:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:06:07.141 14:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:07.141 14:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.141 14:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:07.141 [2024-10-01 14:30:58.716905] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:07.141 [2024-10-01 14:30:58.716990] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:07.141 [2024-10-01 14:30:58.717002] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:07.141 [2024-10-01 14:30:58.717013] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:07.141 14:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:06:07.141 14:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:07.141 14:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:07.141 14:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:07.141 14:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:07.141 14:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:07.142 14:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:07.142 14:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:07.142 14:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:07.142 14:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:07.142 14:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:07.142 14:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:07.142 14:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:07.142 14:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.142 14:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:07.142 14:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.142 14:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:07.142 "name": "Existed_Raid", 00:06:07.142 "uuid": "a9eecd61-b9c1-4dc2-8510-73a138ed2e21", 00:06:07.142 
"strip_size_kb": 64, 00:06:07.142 "state": "configuring", 00:06:07.142 "raid_level": "concat", 00:06:07.142 "superblock": true, 00:06:07.142 "num_base_bdevs": 2, 00:06:07.142 "num_base_bdevs_discovered": 0, 00:06:07.142 "num_base_bdevs_operational": 2, 00:06:07.142 "base_bdevs_list": [ 00:06:07.142 { 00:06:07.142 "name": "BaseBdev1", 00:06:07.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:07.142 "is_configured": false, 00:06:07.142 "data_offset": 0, 00:06:07.142 "data_size": 0 00:06:07.142 }, 00:06:07.142 { 00:06:07.142 "name": "BaseBdev2", 00:06:07.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:07.142 "is_configured": false, 00:06:07.142 "data_offset": 0, 00:06:07.142 "data_size": 0 00:06:07.142 } 00:06:07.142 ] 00:06:07.142 }' 00:06:07.142 14:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:07.142 14:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:07.717 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:07.717 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.717 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:07.717 [2024-10-01 14:30:59.104894] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:07.717 [2024-10-01 14:30:59.104951] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:06:07.717 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.717 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:07.717 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:06:07.717 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:07.717 [2024-10-01 14:30:59.112928] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:07.717 [2024-10-01 14:30:59.112993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:07.717 [2024-10-01 14:30:59.113003] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:07.717 [2024-10-01 14:30:59.113016] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:07.717 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.717 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:07.717 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.717 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:07.717 [2024-10-01 14:30:59.168527] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:07.717 BaseBdev1 00:06:07.717 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.717 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:07.717 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:06:07.717 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:07.717 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:06:07.717 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:07.717 14:30:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:07.717 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:06:07.717 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.717 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:07.717 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.717 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:07.717 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.717 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:07.717 [ 00:06:07.717 { 00:06:07.717 "name": "BaseBdev1", 00:06:07.717 "aliases": [ 00:06:07.717 "860400f5-ed08-4f17-99d4-701508c08f34" 00:06:07.717 ], 00:06:07.717 "product_name": "Malloc disk", 00:06:07.717 "block_size": 512, 00:06:07.717 "num_blocks": 65536, 00:06:07.717 "uuid": "860400f5-ed08-4f17-99d4-701508c08f34", 00:06:07.717 "assigned_rate_limits": { 00:06:07.717 "rw_ios_per_sec": 0, 00:06:07.717 "rw_mbytes_per_sec": 0, 00:06:07.717 "r_mbytes_per_sec": 0, 00:06:07.717 "w_mbytes_per_sec": 0 00:06:07.717 }, 00:06:07.717 "claimed": true, 00:06:07.717 "claim_type": "exclusive_write", 00:06:07.717 "zoned": false, 00:06:07.717 "supported_io_types": { 00:06:07.717 "read": true, 00:06:07.717 "write": true, 00:06:07.717 "unmap": true, 00:06:07.717 "flush": true, 00:06:07.717 "reset": true, 00:06:07.717 "nvme_admin": false, 00:06:07.717 "nvme_io": false, 00:06:07.717 "nvme_io_md": false, 00:06:07.717 "write_zeroes": true, 00:06:07.717 "zcopy": true, 00:06:07.717 "get_zone_info": false, 00:06:07.717 "zone_management": false, 00:06:07.717 "zone_append": false, 00:06:07.717 "compare": false, 00:06:07.717 
"compare_and_write": false, 00:06:07.717 "abort": true, 00:06:07.717 "seek_hole": false, 00:06:07.717 "seek_data": false, 00:06:07.717 "copy": true, 00:06:07.717 "nvme_iov_md": false 00:06:07.717 }, 00:06:07.717 "memory_domains": [ 00:06:07.717 { 00:06:07.717 "dma_device_id": "system", 00:06:07.717 "dma_device_type": 1 00:06:07.717 }, 00:06:07.717 { 00:06:07.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:07.717 "dma_device_type": 2 00:06:07.717 } 00:06:07.717 ], 00:06:07.717 "driver_specific": {} 00:06:07.717 } 00:06:07.717 ] 00:06:07.717 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.717 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:06:07.717 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:07.717 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:07.717 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:07.717 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:07.717 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:07.717 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:07.717 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:07.717 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:07.717 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:07.717 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:07.717 14:30:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:07.717 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.717 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:07.717 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:07.717 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.717 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:07.717 "name": "Existed_Raid", 00:06:07.717 "uuid": "74baaeb6-d9eb-491e-a268-d4c1f93f6540", 00:06:07.717 "strip_size_kb": 64, 00:06:07.717 "state": "configuring", 00:06:07.717 "raid_level": "concat", 00:06:07.717 "superblock": true, 00:06:07.717 "num_base_bdevs": 2, 00:06:07.717 "num_base_bdevs_discovered": 1, 00:06:07.717 "num_base_bdevs_operational": 2, 00:06:07.717 "base_bdevs_list": [ 00:06:07.717 { 00:06:07.717 "name": "BaseBdev1", 00:06:07.717 "uuid": "860400f5-ed08-4f17-99d4-701508c08f34", 00:06:07.717 "is_configured": true, 00:06:07.717 "data_offset": 2048, 00:06:07.717 "data_size": 63488 00:06:07.717 }, 00:06:07.717 { 00:06:07.717 "name": "BaseBdev2", 00:06:07.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:07.717 "is_configured": false, 00:06:07.717 "data_offset": 0, 00:06:07.717 "data_size": 0 00:06:07.717 } 00:06:07.717 ] 00:06:07.717 }' 00:06:07.717 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:07.717 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:07.979 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:07.979 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:06:07.979 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:07.979 [2024-10-01 14:30:59.540680] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:07.979 [2024-10-01 14:30:59.541002] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:06:07.979 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.979 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:07.979 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.979 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:07.979 [2024-10-01 14:30:59.552896] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:07.979 [2024-10-01 14:30:59.555571] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:07.979 [2024-10-01 14:30:59.555656] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:07.979 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.979 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:07.979 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:07.979 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:07.979 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:07.979 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:06:07.979 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:07.979 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:07.979 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:07.979 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:07.979 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:07.979 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:07.979 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:07.979 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:07.979 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.979 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:07.979 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:07.979 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.979 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:07.979 "name": "Existed_Raid", 00:06:07.979 "uuid": "4c8c712c-5f81-442f-973b-f2d610dd4abe", 00:06:07.979 "strip_size_kb": 64, 00:06:07.979 "state": "configuring", 00:06:07.979 "raid_level": "concat", 00:06:07.979 "superblock": true, 00:06:07.979 "num_base_bdevs": 2, 00:06:07.979 "num_base_bdevs_discovered": 1, 00:06:07.979 "num_base_bdevs_operational": 2, 00:06:07.979 "base_bdevs_list": [ 00:06:07.979 { 00:06:07.979 "name": "BaseBdev1", 00:06:07.979 "uuid": 
"860400f5-ed08-4f17-99d4-701508c08f34", 00:06:07.979 "is_configured": true, 00:06:07.979 "data_offset": 2048, 00:06:07.979 "data_size": 63488 00:06:07.979 }, 00:06:07.979 { 00:06:07.979 "name": "BaseBdev2", 00:06:07.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:07.979 "is_configured": false, 00:06:07.979 "data_offset": 0, 00:06:07.979 "data_size": 0 00:06:07.979 } 00:06:07.979 ] 00:06:07.979 }' 00:06:07.979 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:07.979 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:08.549 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:08.549 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.549 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:08.549 [2024-10-01 14:30:59.958546] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:08.549 [2024-10-01 14:30:59.959204] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:08.549 [2024-10-01 14:30:59.959237] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:08.549 [2024-10-01 14:30:59.959560] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:08.549 [2024-10-01 14:30:59.959743] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:08.549 [2024-10-01 14:30:59.959759] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:06:08.549 [2024-10-01 14:30:59.959913] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:08.549 BaseBdev2 00:06:08.549 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:06:08.549 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:08.549 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:06:08.549 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:08.549 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:06:08.549 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:08.549 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:08.549 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:06:08.549 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.549 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:08.549 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.549 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:08.549 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.549 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:08.549 [ 00:06:08.549 { 00:06:08.549 "name": "BaseBdev2", 00:06:08.549 "aliases": [ 00:06:08.549 "7b3d5d04-da63-44ca-bc4a-f78a18deaa7a" 00:06:08.549 ], 00:06:08.549 "product_name": "Malloc disk", 00:06:08.549 "block_size": 512, 00:06:08.549 "num_blocks": 65536, 00:06:08.549 "uuid": "7b3d5d04-da63-44ca-bc4a-f78a18deaa7a", 00:06:08.549 "assigned_rate_limits": { 00:06:08.549 "rw_ios_per_sec": 0, 00:06:08.549 "rw_mbytes_per_sec": 0, 00:06:08.549 "r_mbytes_per_sec": 0, 
00:06:08.549 "w_mbytes_per_sec": 0 00:06:08.549 }, 00:06:08.549 "claimed": true, 00:06:08.549 "claim_type": "exclusive_write", 00:06:08.549 "zoned": false, 00:06:08.549 "supported_io_types": { 00:06:08.549 "read": true, 00:06:08.549 "write": true, 00:06:08.549 "unmap": true, 00:06:08.549 "flush": true, 00:06:08.549 "reset": true, 00:06:08.549 "nvme_admin": false, 00:06:08.549 "nvme_io": false, 00:06:08.549 "nvme_io_md": false, 00:06:08.549 "write_zeroes": true, 00:06:08.549 "zcopy": true, 00:06:08.549 "get_zone_info": false, 00:06:08.549 "zone_management": false, 00:06:08.549 "zone_append": false, 00:06:08.549 "compare": false, 00:06:08.549 "compare_and_write": false, 00:06:08.549 "abort": true, 00:06:08.549 "seek_hole": false, 00:06:08.549 "seek_data": false, 00:06:08.549 "copy": true, 00:06:08.549 "nvme_iov_md": false 00:06:08.549 }, 00:06:08.549 "memory_domains": [ 00:06:08.549 { 00:06:08.549 "dma_device_id": "system", 00:06:08.549 "dma_device_type": 1 00:06:08.549 }, 00:06:08.549 { 00:06:08.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:08.549 "dma_device_type": 2 00:06:08.549 } 00:06:08.549 ], 00:06:08.549 "driver_specific": {} 00:06:08.549 } 00:06:08.549 ] 00:06:08.549 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.549 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:06:08.549 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:08.549 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:08.549 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:06:08.549 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:08.549 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:06:08.549 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:08.549 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:08.549 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:08.549 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:08.549 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:08.549 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:08.549 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:08.549 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:08.549 14:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:08.549 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.549 14:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:08.549 14:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.549 14:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:08.549 "name": "Existed_Raid", 00:06:08.549 "uuid": "4c8c712c-5f81-442f-973b-f2d610dd4abe", 00:06:08.549 "strip_size_kb": 64, 00:06:08.549 "state": "online", 00:06:08.549 "raid_level": "concat", 00:06:08.549 "superblock": true, 00:06:08.549 "num_base_bdevs": 2, 00:06:08.549 "num_base_bdevs_discovered": 2, 00:06:08.549 "num_base_bdevs_operational": 2, 00:06:08.549 "base_bdevs_list": [ 00:06:08.549 { 00:06:08.549 "name": "BaseBdev1", 00:06:08.549 "uuid": 
"860400f5-ed08-4f17-99d4-701508c08f34", 00:06:08.549 "is_configured": true, 00:06:08.549 "data_offset": 2048, 00:06:08.549 "data_size": 63488 00:06:08.549 }, 00:06:08.549 { 00:06:08.549 "name": "BaseBdev2", 00:06:08.549 "uuid": "7b3d5d04-da63-44ca-bc4a-f78a18deaa7a", 00:06:08.549 "is_configured": true, 00:06:08.549 "data_offset": 2048, 00:06:08.549 "data_size": 63488 00:06:08.549 } 00:06:08.549 ] 00:06:08.549 }' 00:06:08.549 14:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:08.549 14:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:08.879 14:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:08.879 14:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:08.879 14:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:08.879 14:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:08.879 14:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:06:08.879 14:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:08.879 14:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:08.879 14:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.879 14:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:08.879 14:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:08.879 [2024-10-01 14:31:00.407120] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:08.879 14:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:06:08.879 14:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:08.879 "name": "Existed_Raid", 00:06:08.879 "aliases": [ 00:06:08.879 "4c8c712c-5f81-442f-973b-f2d610dd4abe" 00:06:08.879 ], 00:06:08.879 "product_name": "Raid Volume", 00:06:08.879 "block_size": 512, 00:06:08.879 "num_blocks": 126976, 00:06:08.879 "uuid": "4c8c712c-5f81-442f-973b-f2d610dd4abe", 00:06:08.879 "assigned_rate_limits": { 00:06:08.879 "rw_ios_per_sec": 0, 00:06:08.879 "rw_mbytes_per_sec": 0, 00:06:08.879 "r_mbytes_per_sec": 0, 00:06:08.879 "w_mbytes_per_sec": 0 00:06:08.879 }, 00:06:08.879 "claimed": false, 00:06:08.879 "zoned": false, 00:06:08.879 "supported_io_types": { 00:06:08.879 "read": true, 00:06:08.879 "write": true, 00:06:08.879 "unmap": true, 00:06:08.879 "flush": true, 00:06:08.879 "reset": true, 00:06:08.879 "nvme_admin": false, 00:06:08.879 "nvme_io": false, 00:06:08.879 "nvme_io_md": false, 00:06:08.879 "write_zeroes": true, 00:06:08.879 "zcopy": false, 00:06:08.879 "get_zone_info": false, 00:06:08.879 "zone_management": false, 00:06:08.879 "zone_append": false, 00:06:08.879 "compare": false, 00:06:08.879 "compare_and_write": false, 00:06:08.879 "abort": false, 00:06:08.879 "seek_hole": false, 00:06:08.879 "seek_data": false, 00:06:08.879 "copy": false, 00:06:08.879 "nvme_iov_md": false 00:06:08.879 }, 00:06:08.880 "memory_domains": [ 00:06:08.880 { 00:06:08.880 "dma_device_id": "system", 00:06:08.880 "dma_device_type": 1 00:06:08.880 }, 00:06:08.880 { 00:06:08.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:08.880 "dma_device_type": 2 00:06:08.880 }, 00:06:08.880 { 00:06:08.880 "dma_device_id": "system", 00:06:08.880 "dma_device_type": 1 00:06:08.880 }, 00:06:08.880 { 00:06:08.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:08.880 "dma_device_type": 2 00:06:08.880 } 00:06:08.880 ], 00:06:08.880 "driver_specific": { 00:06:08.880 "raid": { 00:06:08.880 "uuid": "4c8c712c-5f81-442f-973b-f2d610dd4abe", 00:06:08.880 
"strip_size_kb": 64, 00:06:08.880 "state": "online", 00:06:08.880 "raid_level": "concat", 00:06:08.880 "superblock": true, 00:06:08.880 "num_base_bdevs": 2, 00:06:08.880 "num_base_bdevs_discovered": 2, 00:06:08.880 "num_base_bdevs_operational": 2, 00:06:08.880 "base_bdevs_list": [ 00:06:08.880 { 00:06:08.880 "name": "BaseBdev1", 00:06:08.880 "uuid": "860400f5-ed08-4f17-99d4-701508c08f34", 00:06:08.880 "is_configured": true, 00:06:08.880 "data_offset": 2048, 00:06:08.880 "data_size": 63488 00:06:08.880 }, 00:06:08.880 { 00:06:08.880 "name": "BaseBdev2", 00:06:08.880 "uuid": "7b3d5d04-da63-44ca-bc4a-f78a18deaa7a", 00:06:08.880 "is_configured": true, 00:06:08.880 "data_offset": 2048, 00:06:08.880 "data_size": 63488 00:06:08.880 } 00:06:08.880 ] 00:06:08.880 } 00:06:08.880 } 00:06:08.880 }' 00:06:08.880 14:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:08.880 14:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:08.880 BaseBdev2' 00:06:08.880 14:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:08.880 14:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:08.880 14:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:08.880 14:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:08.880 14:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.880 14:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:08.880 14:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:06:08.880 14:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.145 14:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:09.145 14:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:09.145 14:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:09.145 14:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:09.145 14:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.146 14:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:09.146 14:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:09.146 14:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.146 14:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:09.146 14:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:09.146 14:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:09.146 14:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.146 14:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:09.146 [2024-10-01 14:31:00.606892] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:09.146 [2024-10-01 14:31:00.606952] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:09.146 [2024-10-01 14:31:00.607014] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:06:09.146 14:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.146 14:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:09.146 14:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:06:09.146 14:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:09.146 14:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:06:09.146 14:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:06:09.146 14:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:06:09.146 14:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:09.146 14:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:09.146 14:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:09.146 14:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:09.146 14:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:09.146 14:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:09.146 14:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:09.146 14:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:09.146 14:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:09.146 14:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:06:09.146 14:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:09.146 14:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.146 14:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:09.146 14:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.146 14:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:09.146 "name": "Existed_Raid", 00:06:09.146 "uuid": "4c8c712c-5f81-442f-973b-f2d610dd4abe", 00:06:09.146 "strip_size_kb": 64, 00:06:09.146 "state": "offline", 00:06:09.146 "raid_level": "concat", 00:06:09.146 "superblock": true, 00:06:09.146 "num_base_bdevs": 2, 00:06:09.146 "num_base_bdevs_discovered": 1, 00:06:09.146 "num_base_bdevs_operational": 1, 00:06:09.146 "base_bdevs_list": [ 00:06:09.146 { 00:06:09.146 "name": null, 00:06:09.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:09.146 "is_configured": false, 00:06:09.146 "data_offset": 0, 00:06:09.146 "data_size": 63488 00:06:09.146 }, 00:06:09.146 { 00:06:09.146 "name": "BaseBdev2", 00:06:09.146 "uuid": "7b3d5d04-da63-44ca-bc4a-f78a18deaa7a", 00:06:09.146 "is_configured": true, 00:06:09.146 "data_offset": 2048, 00:06:09.146 "data_size": 63488 00:06:09.146 } 00:06:09.146 ] 00:06:09.146 }' 00:06:09.146 14:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:09.146 14:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:09.408 14:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:09.408 14:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:09.408 14:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:09.408 14:31:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:09.408 14:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.408 14:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:09.668 14:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.668 14:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:09.668 14:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:09.668 14:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:09.668 14:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.668 14:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:09.668 [2024-10-01 14:31:01.127851] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:09.668 [2024-10-01 14:31:01.127929] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:06:09.668 14:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.668 14:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:09.668 14:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:09.668 14:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:09.668 14:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:09.668 14:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.668 14:31:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:09.668 14:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.668 14:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:09.668 14:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:09.668 14:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:09.668 14:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60767 00:06:09.668 14:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 60767 ']' 00:06:09.668 14:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 60767 00:06:09.668 14:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:06:09.668 14:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:09.668 14:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60767 00:06:09.668 killing process with pid 60767 00:06:09.668 14:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:09.668 14:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:09.668 14:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60767' 00:06:09.668 14:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 60767 00:06:09.668 14:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 60767 00:06:09.668 [2024-10-01 14:31:01.266606] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:09.668 [2024-10-01 14:31:01.278769] 
bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:10.610 14:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:06:10.610 00:06:10.610 real 0m4.521s 00:06:10.610 user 0m6.461s 00:06:10.610 sys 0m0.716s 00:06:10.610 ************************************ 00:06:10.610 END TEST raid_state_function_test_sb 00:06:10.610 14:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.610 14:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:10.610 ************************************ 00:06:10.610 14:31:02 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:06:10.610 14:31:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:10.610 14:31:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.610 14:31:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:10.610 ************************************ 00:06:10.610 START TEST raid_superblock_test 00:06:10.610 ************************************ 00:06:10.610 14:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 2 00:06:10.610 14:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:06:10.610 14:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:06:10.610 14:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:06:10.610 14:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:06:10.610 14:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:06:10.610 14:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:06:10.610 14:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:06:10.610 
14:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:06:10.610 14:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:06:10.610 14:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:06:10.610 14:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:06:10.610 14:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:06:10.610 14:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:06:10.610 14:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:06:10.610 14:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:06:10.610 14:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:06:10.610 14:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61008 00:06:10.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.610 14:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61008 00:06:10.610 14:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 61008 ']' 00:06:10.610 14:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.610 14:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:10.610 14:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:06:10.610 14:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:10.610 14:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:10.610 14:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:10.872 [2024-10-01 14:31:02.365627] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:06:10.872 [2024-10-01 14:31:02.365821] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61008 ] 00:06:10.872 [2024-10-01 14:31:02.520037] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.133 [2024-10-01 14:31:02.791193] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.392 [2024-10-01 14:31:02.963861] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:11.392 [2024-10-01 14:31:02.963939] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:11.652 14:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:11.652 14:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:06:11.652 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:06:11.652 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:11.652 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:06:11.652 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:06:11.652 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:06:11.652 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:11.652 14:31:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:06:11.652 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:11.652 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:06:11.652 14:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.652 14:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:11.913 malloc1 00:06:11.913 14:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.913 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:06:11.913 14:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.913 14:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:11.913 [2024-10-01 14:31:03.348621] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:11.913 [2024-10-01 14:31:03.348740] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:11.913 [2024-10-01 14:31:03.348772] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:11.913 [2024-10-01 14:31:03.348788] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:11.913 [2024-10-01 14:31:03.351425] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:11.913 [2024-10-01 14:31:03.351485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:11.913 pt1 00:06:11.913 14:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.913 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:06:11.913 14:31:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:11.913 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:06:11.913 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:06:11.913 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:06:11.913 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:11.913 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:06:11.913 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:11.913 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:06:11.913 14:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.913 14:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:11.913 malloc2 00:06:11.913 14:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.913 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:11.913 14:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.913 14:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:11.913 [2024-10-01 14:31:03.409016] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:11.913 [2024-10-01 14:31:03.409112] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:11.913 [2024-10-01 14:31:03.409142] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:06:11.913 
[2024-10-01 14:31:03.409153] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:11.913 [2024-10-01 14:31:03.411799] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:11.913 [2024-10-01 14:31:03.411853] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:11.913 pt2 00:06:11.913 14:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.913 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:06:11.913 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:11.913 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:06:11.913 14:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.913 14:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:11.913 [2024-10-01 14:31:03.421122] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:11.913 [2024-10-01 14:31:03.423555] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:11.913 [2024-10-01 14:31:03.423793] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:11.913 [2024-10-01 14:31:03.423810] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:11.913 [2024-10-01 14:31:03.424145] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:11.913 [2024-10-01 14:31:03.424314] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:11.913 [2024-10-01 14:31:03.424326] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:06:11.913 [2024-10-01 14:31:03.424506] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:11.913 14:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.913 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:06:11.913 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:11.913 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:11.913 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:11.913 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:11.913 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:11.913 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:11.913 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:11.913 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:11.913 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:11.913 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:11.913 14:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.913 14:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:11.913 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:11.913 14:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.913 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:11.913 "name": "raid_bdev1", 00:06:11.913 "uuid": 
"cd0a28d4-af2f-4b58-bdaf-76e4b0c6c0ea", 00:06:11.913 "strip_size_kb": 64, 00:06:11.913 "state": "online", 00:06:11.913 "raid_level": "concat", 00:06:11.913 "superblock": true, 00:06:11.914 "num_base_bdevs": 2, 00:06:11.914 "num_base_bdevs_discovered": 2, 00:06:11.914 "num_base_bdevs_operational": 2, 00:06:11.914 "base_bdevs_list": [ 00:06:11.914 { 00:06:11.914 "name": "pt1", 00:06:11.914 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:11.914 "is_configured": true, 00:06:11.914 "data_offset": 2048, 00:06:11.914 "data_size": 63488 00:06:11.914 }, 00:06:11.914 { 00:06:11.914 "name": "pt2", 00:06:11.914 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:11.914 "is_configured": true, 00:06:11.914 "data_offset": 2048, 00:06:11.914 "data_size": 63488 00:06:11.914 } 00:06:11.914 ] 00:06:11.914 }' 00:06:11.914 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:11.914 14:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:12.175 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:06:12.175 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:06:12.175 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:12.175 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:12.175 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:12.175 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:12.175 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:12.175 14:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.175 14:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:12.175 
14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:12.175 [2024-10-01 14:31:03.757449] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:12.175 14:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.175 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:12.175 "name": "raid_bdev1", 00:06:12.175 "aliases": [ 00:06:12.175 "cd0a28d4-af2f-4b58-bdaf-76e4b0c6c0ea" 00:06:12.175 ], 00:06:12.175 "product_name": "Raid Volume", 00:06:12.175 "block_size": 512, 00:06:12.175 "num_blocks": 126976, 00:06:12.175 "uuid": "cd0a28d4-af2f-4b58-bdaf-76e4b0c6c0ea", 00:06:12.175 "assigned_rate_limits": { 00:06:12.175 "rw_ios_per_sec": 0, 00:06:12.175 "rw_mbytes_per_sec": 0, 00:06:12.175 "r_mbytes_per_sec": 0, 00:06:12.175 "w_mbytes_per_sec": 0 00:06:12.175 }, 00:06:12.175 "claimed": false, 00:06:12.175 "zoned": false, 00:06:12.175 "supported_io_types": { 00:06:12.175 "read": true, 00:06:12.175 "write": true, 00:06:12.175 "unmap": true, 00:06:12.175 "flush": true, 00:06:12.175 "reset": true, 00:06:12.175 "nvme_admin": false, 00:06:12.175 "nvme_io": false, 00:06:12.175 "nvme_io_md": false, 00:06:12.175 "write_zeroes": true, 00:06:12.175 "zcopy": false, 00:06:12.175 "get_zone_info": false, 00:06:12.175 "zone_management": false, 00:06:12.175 "zone_append": false, 00:06:12.175 "compare": false, 00:06:12.175 "compare_and_write": false, 00:06:12.175 "abort": false, 00:06:12.175 "seek_hole": false, 00:06:12.175 "seek_data": false, 00:06:12.175 "copy": false, 00:06:12.175 "nvme_iov_md": false 00:06:12.175 }, 00:06:12.175 "memory_domains": [ 00:06:12.175 { 00:06:12.175 "dma_device_id": "system", 00:06:12.175 "dma_device_type": 1 00:06:12.175 }, 00:06:12.175 { 00:06:12.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:12.175 "dma_device_type": 2 00:06:12.175 }, 00:06:12.175 { 00:06:12.175 "dma_device_id": "system", 00:06:12.175 
"dma_device_type": 1 00:06:12.175 }, 00:06:12.175 { 00:06:12.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:12.175 "dma_device_type": 2 00:06:12.175 } 00:06:12.175 ], 00:06:12.175 "driver_specific": { 00:06:12.175 "raid": { 00:06:12.175 "uuid": "cd0a28d4-af2f-4b58-bdaf-76e4b0c6c0ea", 00:06:12.175 "strip_size_kb": 64, 00:06:12.175 "state": "online", 00:06:12.175 "raid_level": "concat", 00:06:12.175 "superblock": true, 00:06:12.175 "num_base_bdevs": 2, 00:06:12.175 "num_base_bdevs_discovered": 2, 00:06:12.175 "num_base_bdevs_operational": 2, 00:06:12.175 "base_bdevs_list": [ 00:06:12.175 { 00:06:12.175 "name": "pt1", 00:06:12.175 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:12.175 "is_configured": true, 00:06:12.175 "data_offset": 2048, 00:06:12.175 "data_size": 63488 00:06:12.175 }, 00:06:12.175 { 00:06:12.175 "name": "pt2", 00:06:12.175 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:12.175 "is_configured": true, 00:06:12.175 "data_offset": 2048, 00:06:12.175 "data_size": 63488 00:06:12.175 } 00:06:12.175 ] 00:06:12.175 } 00:06:12.175 } 00:06:12.175 }' 00:06:12.175 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:12.175 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:06:12.175 pt2' 00:06:12.175 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:12.175 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:12.175 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:12.175 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:06:12.175 14:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.175 14:31:03 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:12.175 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:12.437 14:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.437 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:12.437 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:12.437 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:12.437 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:06:12.437 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:12.437 14:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.437 14:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:12.437 14:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.437 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:12.437 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:12.437 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:12.437 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:06:12.437 14:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.437 14:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:12.437 [2024-10-01 14:31:03.929465] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:06:12.437 14:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.437 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=cd0a28d4-af2f-4b58-bdaf-76e4b0c6c0ea 00:06:12.437 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z cd0a28d4-af2f-4b58-bdaf-76e4b0c6c0ea ']' 00:06:12.437 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:12.437 14:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.437 14:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:12.437 [2024-10-01 14:31:03.969168] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:12.437 [2024-10-01 14:31:03.969208] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:12.437 [2024-10-01 14:31:03.969307] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:12.437 [2024-10-01 14:31:03.969364] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:12.437 [2024-10-01 14:31:03.969380] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:06:12.437 14:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.437 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:12.437 14:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.437 14:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:12.437 14:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:06:12.437 14:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:06:12.437 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:06:12.437 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:06:12.437 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:06:12.437 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:06:12.437 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.437 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:12.437 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.437 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:06:12.437 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:06:12.437 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.437 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:12.437 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.437 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:06:12.437 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:06:12.437 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.437 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:12.437 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.437 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:06:12.437 14:31:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:12.437 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:06:12.437 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:12.437 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:12.437 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.437 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:12.437 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.437 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:12.437 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.437 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:12.437 [2024-10-01 14:31:04.077213] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:12.437 [2024-10-01 14:31:04.079486] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:12.437 [2024-10-01 14:31:04.079581] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:06:12.437 [2024-10-01 14:31:04.079650] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:06:12.437 [2024-10-01 14:31:04.079666] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:12.437 [2024-10-01 14:31:04.079679] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:06:12.437 request: 00:06:12.437 { 00:06:12.437 "name": "raid_bdev1", 00:06:12.437 "raid_level": "concat", 00:06:12.437 "base_bdevs": [ 00:06:12.437 "malloc1", 00:06:12.437 "malloc2" 00:06:12.437 ], 00:06:12.437 "strip_size_kb": 64, 00:06:12.437 "superblock": false, 00:06:12.437 "method": "bdev_raid_create", 00:06:12.437 "req_id": 1 00:06:12.437 } 00:06:12.437 Got JSON-RPC error response 00:06:12.437 response: 00:06:12.437 { 00:06:12.437 "code": -17, 00:06:12.437 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:06:12.438 } 00:06:12.438 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:12.438 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:06:12.438 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:12.438 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:12.438 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:12.438 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:12.438 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:06:12.438 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.438 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:12.438 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.438 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:06:12.438 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:06:12.438 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:06:12.438 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.438 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:12.781 [2024-10-01 14:31:04.125211] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:12.781 [2024-10-01 14:31:04.125305] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:12.781 [2024-10-01 14:31:04.125329] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:06:12.781 [2024-10-01 14:31:04.125342] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:12.781 [2024-10-01 14:31:04.127981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:12.781 [2024-10-01 14:31:04.128041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:12.781 [2024-10-01 14:31:04.128148] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:06:12.781 [2024-10-01 14:31:04.128212] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:12.781 pt1 00:06:12.781 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.781 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:06:12.781 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:12.781 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:12.781 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:12.781 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:12.781 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:06:12.781 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:12.781 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:12.781 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:12.781 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:12.781 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:12.781 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.781 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:12.781 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:12.781 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.781 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:12.781 "name": "raid_bdev1", 00:06:12.781 "uuid": "cd0a28d4-af2f-4b58-bdaf-76e4b0c6c0ea", 00:06:12.781 "strip_size_kb": 64, 00:06:12.781 "state": "configuring", 00:06:12.781 "raid_level": "concat", 00:06:12.781 "superblock": true, 00:06:12.781 "num_base_bdevs": 2, 00:06:12.781 "num_base_bdevs_discovered": 1, 00:06:12.781 "num_base_bdevs_operational": 2, 00:06:12.781 "base_bdevs_list": [ 00:06:12.781 { 00:06:12.781 "name": "pt1", 00:06:12.781 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:12.781 "is_configured": true, 00:06:12.781 "data_offset": 2048, 00:06:12.781 "data_size": 63488 00:06:12.781 }, 00:06:12.781 { 00:06:12.781 "name": null, 00:06:12.781 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:12.781 "is_configured": false, 00:06:12.781 "data_offset": 2048, 00:06:12.781 "data_size": 63488 00:06:12.781 } 00:06:12.781 ] 00:06:12.781 }' 00:06:12.781 14:31:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:12.781 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:13.046 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:06:13.046 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:06:13.046 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:06:13.046 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:13.046 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.046 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:13.046 [2024-10-01 14:31:04.509270] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:13.046 [2024-10-01 14:31:04.509367] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:13.046 [2024-10-01 14:31:04.509390] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:06:13.046 [2024-10-01 14:31:04.509402] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:13.046 [2024-10-01 14:31:04.510002] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:13.046 [2024-10-01 14:31:04.510027] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:13.046 [2024-10-01 14:31:04.510121] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:06:13.046 [2024-10-01 14:31:04.510146] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:13.046 [2024-10-01 14:31:04.510269] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:13.046 [2024-10-01 14:31:04.510281] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:13.046 [2024-10-01 14:31:04.510553] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:06:13.046 [2024-10-01 14:31:04.510692] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:13.046 [2024-10-01 14:31:04.510701] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:06:13.046 [2024-10-01 14:31:04.510874] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:13.046 pt2 00:06:13.046 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.046 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:06:13.046 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:06:13.046 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:06:13.046 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:13.046 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:13.046 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:13.046 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:13.046 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:13.046 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:13.047 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:13.047 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:13.047 14:31:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:06:13.047 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:13.047 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:13.047 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.047 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:13.047 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.047 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:13.047 "name": "raid_bdev1", 00:06:13.047 "uuid": "cd0a28d4-af2f-4b58-bdaf-76e4b0c6c0ea", 00:06:13.047 "strip_size_kb": 64, 00:06:13.047 "state": "online", 00:06:13.047 "raid_level": "concat", 00:06:13.047 "superblock": true, 00:06:13.047 "num_base_bdevs": 2, 00:06:13.047 "num_base_bdevs_discovered": 2, 00:06:13.047 "num_base_bdevs_operational": 2, 00:06:13.047 "base_bdevs_list": [ 00:06:13.047 { 00:06:13.047 "name": "pt1", 00:06:13.047 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:13.047 "is_configured": true, 00:06:13.047 "data_offset": 2048, 00:06:13.047 "data_size": 63488 00:06:13.047 }, 00:06:13.047 { 00:06:13.047 "name": "pt2", 00:06:13.047 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:13.047 "is_configured": true, 00:06:13.047 "data_offset": 2048, 00:06:13.047 "data_size": 63488 00:06:13.047 } 00:06:13.047 ] 00:06:13.047 }' 00:06:13.047 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:13.047 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:13.310 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:06:13.310 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:06:13.310 
14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:13.310 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:13.310 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:13.310 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:13.310 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:13.310 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.310 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:13.310 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:13.310 [2024-10-01 14:31:04.837682] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:13.310 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.310 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:13.310 "name": "raid_bdev1", 00:06:13.310 "aliases": [ 00:06:13.310 "cd0a28d4-af2f-4b58-bdaf-76e4b0c6c0ea" 00:06:13.310 ], 00:06:13.310 "product_name": "Raid Volume", 00:06:13.310 "block_size": 512, 00:06:13.310 "num_blocks": 126976, 00:06:13.310 "uuid": "cd0a28d4-af2f-4b58-bdaf-76e4b0c6c0ea", 00:06:13.310 "assigned_rate_limits": { 00:06:13.310 "rw_ios_per_sec": 0, 00:06:13.310 "rw_mbytes_per_sec": 0, 00:06:13.310 "r_mbytes_per_sec": 0, 00:06:13.310 "w_mbytes_per_sec": 0 00:06:13.310 }, 00:06:13.310 "claimed": false, 00:06:13.310 "zoned": false, 00:06:13.310 "supported_io_types": { 00:06:13.310 "read": true, 00:06:13.310 "write": true, 00:06:13.310 "unmap": true, 00:06:13.310 "flush": true, 00:06:13.310 "reset": true, 00:06:13.310 "nvme_admin": false, 00:06:13.310 "nvme_io": false, 00:06:13.310 "nvme_io_md": false, 00:06:13.310 
"write_zeroes": true, 00:06:13.310 "zcopy": false, 00:06:13.310 "get_zone_info": false, 00:06:13.310 "zone_management": false, 00:06:13.310 "zone_append": false, 00:06:13.310 "compare": false, 00:06:13.310 "compare_and_write": false, 00:06:13.310 "abort": false, 00:06:13.310 "seek_hole": false, 00:06:13.310 "seek_data": false, 00:06:13.310 "copy": false, 00:06:13.310 "nvme_iov_md": false 00:06:13.310 }, 00:06:13.310 "memory_domains": [ 00:06:13.310 { 00:06:13.310 "dma_device_id": "system", 00:06:13.310 "dma_device_type": 1 00:06:13.310 }, 00:06:13.310 { 00:06:13.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:13.310 "dma_device_type": 2 00:06:13.310 }, 00:06:13.310 { 00:06:13.310 "dma_device_id": "system", 00:06:13.310 "dma_device_type": 1 00:06:13.310 }, 00:06:13.310 { 00:06:13.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:13.310 "dma_device_type": 2 00:06:13.310 } 00:06:13.310 ], 00:06:13.310 "driver_specific": { 00:06:13.310 "raid": { 00:06:13.310 "uuid": "cd0a28d4-af2f-4b58-bdaf-76e4b0c6c0ea", 00:06:13.310 "strip_size_kb": 64, 00:06:13.310 "state": "online", 00:06:13.310 "raid_level": "concat", 00:06:13.310 "superblock": true, 00:06:13.310 "num_base_bdevs": 2, 00:06:13.310 "num_base_bdevs_discovered": 2, 00:06:13.310 "num_base_bdevs_operational": 2, 00:06:13.310 "base_bdevs_list": [ 00:06:13.310 { 00:06:13.310 "name": "pt1", 00:06:13.310 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:13.310 "is_configured": true, 00:06:13.310 "data_offset": 2048, 00:06:13.310 "data_size": 63488 00:06:13.310 }, 00:06:13.310 { 00:06:13.310 "name": "pt2", 00:06:13.310 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:13.310 "is_configured": true, 00:06:13.310 "data_offset": 2048, 00:06:13.310 "data_size": 63488 00:06:13.310 } 00:06:13.310 ] 00:06:13.310 } 00:06:13.310 } 00:06:13.310 }' 00:06:13.310 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:06:13.310 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:06:13.310 pt2' 00:06:13.310 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:13.310 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:13.310 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:13.310 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:06:13.310 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.310 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:13.310 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:13.310 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.310 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:13.310 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:13.310 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:13.310 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:06:13.310 14:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:13.310 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.310 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:13.573 14:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.573 14:31:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:13.573 14:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:13.573 14:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:13.573 14:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.573 14:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:13.573 14:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:06:13.573 [2024-10-01 14:31:05.013722] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:13.573 14:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.573 14:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' cd0a28d4-af2f-4b58-bdaf-76e4b0c6c0ea '!=' cd0a28d4-af2f-4b58-bdaf-76e4b0c6c0ea ']' 00:06:13.573 14:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:06:13.573 14:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:13.573 14:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:13.573 14:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61008 00:06:13.573 14:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 61008 ']' 00:06:13.573 14:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 61008 00:06:13.573 14:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:06:13.573 14:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:13.573 14:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61008 00:06:13.573 killing process with pid 61008 
00:06:13.573 14:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:13.573 14:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:13.573 14:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61008' 00:06:13.573 14:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 61008 00:06:13.573 [2024-10-01 14:31:05.067117] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:13.573 14:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 61008 00:06:13.573 [2024-10-01 14:31:05.067238] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:13.573 [2024-10-01 14:31:05.067297] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:13.573 [2024-10-01 14:31:05.067310] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:06:13.573 [2024-10-01 14:31:05.215147] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:14.515 14:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:06:14.515 00:06:14.515 real 0m3.860s 00:06:14.515 user 0m5.241s 00:06:14.515 sys 0m0.683s 00:06:14.515 14:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.515 14:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:14.515 ************************************ 00:06:14.515 END TEST raid_superblock_test 00:06:14.515 ************************************ 00:06:14.776 14:31:06 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:06:14.776 14:31:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:14.776 14:31:06 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.776 14:31:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:14.776 ************************************ 00:06:14.776 START TEST raid_read_error_test 00:06:14.776 ************************************ 00:06:14.776 14:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read 00:06:14.776 14:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:06:14.776 14:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:06:14.776 14:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:06:14.776 14:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:06:14.776 14:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:14.776 14:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:06:14.776 14:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:14.776 14:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:14.776 14:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:06:14.776 14:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:14.776 14:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:14.776 14:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:14.776 14:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:06:14.776 14:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:06:14.776 14:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:06:14.776 14:31:06 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:06:14.776 14:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:06:14.776 14:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:06:14.776 14:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:06:14.776 14:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:06:14.776 14:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:06:14.776 14:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:06:14.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.776 14:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.DCSpTcyF6o 00:06:14.776 14:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61214 00:06:14.776 14:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61214 00:06:14.776 14:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 61214 ']' 00:06:14.776 14:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.776 14:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.776 14:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:14.776 14:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.776 14:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:14.776 14:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:06:14.776 [2024-10-01 14:31:06.311146] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:06:14.776 [2024-10-01 14:31:06.311316] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61214 ] 00:06:15.035 [2024-10-01 14:31:06.462969] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.294 [2024-10-01 14:31:06.726822] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.294 [2024-10-01 14:31:06.895671] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:15.294 [2024-10-01 14:31:06.895764] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:15.552 14:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.552 14:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:06:15.552 14:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:15.552 14:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:06:15.552 14:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.552 14:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:15.815 BaseBdev1_malloc 00:06:15.815 14:31:07 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.815 14:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:06:15.815 14:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.815 14:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:15.815 true 00:06:15.815 14:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.815 14:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:06:15.815 14:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.815 14:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:15.815 [2024-10-01 14:31:07.263307] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:06:15.815 [2024-10-01 14:31:07.263408] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:15.815 [2024-10-01 14:31:07.263433] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:06:15.815 [2024-10-01 14:31:07.263447] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:15.815 [2024-10-01 14:31:07.266183] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:15.815 [2024-10-01 14:31:07.266421] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:06:15.815 BaseBdev1 00:06:15.815 14:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.815 14:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:15.815 14:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:06:15.815 14:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.815 14:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:15.815 BaseBdev2_malloc 00:06:15.815 14:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.815 14:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:06:15.815 14:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.815 14:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:15.815 true 00:06:15.815 14:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.815 14:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:06:15.815 14:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.815 14:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:15.815 [2024-10-01 14:31:07.329220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:06:15.815 [2024-10-01 14:31:07.329312] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:15.815 [2024-10-01 14:31:07.329336] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:15.815 [2024-10-01 14:31:07.329349] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:15.815 [2024-10-01 14:31:07.332094] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:15.815 [2024-10-01 14:31:07.332160] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:06:15.815 BaseBdev2 00:06:15.815 14:31:07 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.815 14:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:06:15.815 14:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.815 14:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:15.815 [2024-10-01 14:31:07.341350] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:15.815 [2024-10-01 14:31:07.343864] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:15.815 [2024-10-01 14:31:07.344122] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:15.815 [2024-10-01 14:31:07.344138] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:15.815 [2024-10-01 14:31:07.344471] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:15.815 [2024-10-01 14:31:07.344654] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:15.815 [2024-10-01 14:31:07.344663] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:06:15.815 [2024-10-01 14:31:07.345094] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:15.815 14:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.815 14:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:06:15.815 14:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:15.815 14:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:15.815 14:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- 
# local raid_level=concat 00:06:15.815 14:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:15.815 14:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:15.815 14:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:15.815 14:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:15.815 14:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:15.815 14:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:15.815 14:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:15.815 14:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:15.815 14:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.815 14:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:15.815 14:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.815 14:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:15.815 "name": "raid_bdev1", 00:06:15.815 "uuid": "16d06b37-67f0-434e-9954-9ba3635eaf81", 00:06:15.815 "strip_size_kb": 64, 00:06:15.815 "state": "online", 00:06:15.815 "raid_level": "concat", 00:06:15.815 "superblock": true, 00:06:15.815 "num_base_bdevs": 2, 00:06:15.815 "num_base_bdevs_discovered": 2, 00:06:15.815 "num_base_bdevs_operational": 2, 00:06:15.815 "base_bdevs_list": [ 00:06:15.815 { 00:06:15.815 "name": "BaseBdev1", 00:06:15.815 "uuid": "e1c6ef93-ad3d-5251-abbb-07b3b49117cd", 00:06:15.815 "is_configured": true, 00:06:15.815 "data_offset": 2048, 00:06:15.815 "data_size": 63488 00:06:15.815 }, 00:06:15.815 { 00:06:15.815 "name": "BaseBdev2", 00:06:15.815 
"uuid": "328b9bc7-bf3e-5edb-8a00-60214670d6bd", 00:06:15.815 "is_configured": true, 00:06:15.815 "data_offset": 2048, 00:06:15.815 "data_size": 63488 00:06:15.815 } 00:06:15.815 ] 00:06:15.815 }' 00:06:15.815 14:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:15.815 14:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:16.078 14:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:06:16.078 14:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:16.340 [2024-10-01 14:31:07.782612] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:17.316 14:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:06:17.316 14:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.316 14:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:17.316 14:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.316 14:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:06:17.316 14:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:06:17.316 14:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:06:17.316 14:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:06:17.316 14:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:17.316 14:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:17.316 14:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=concat 00:06:17.316 14:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:17.316 14:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:17.316 14:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:17.316 14:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:17.316 14:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:17.316 14:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:17.316 14:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:17.316 14:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:17.316 14:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.316 14:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:17.316 14:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.316 14:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:17.316 "name": "raid_bdev1", 00:06:17.316 "uuid": "16d06b37-67f0-434e-9954-9ba3635eaf81", 00:06:17.316 "strip_size_kb": 64, 00:06:17.316 "state": "online", 00:06:17.316 "raid_level": "concat", 00:06:17.316 "superblock": true, 00:06:17.316 "num_base_bdevs": 2, 00:06:17.316 "num_base_bdevs_discovered": 2, 00:06:17.316 "num_base_bdevs_operational": 2, 00:06:17.316 "base_bdevs_list": [ 00:06:17.316 { 00:06:17.316 "name": "BaseBdev1", 00:06:17.316 "uuid": "e1c6ef93-ad3d-5251-abbb-07b3b49117cd", 00:06:17.316 "is_configured": true, 00:06:17.316 "data_offset": 2048, 00:06:17.316 "data_size": 63488 00:06:17.316 }, 00:06:17.316 { 00:06:17.316 "name": "BaseBdev2", 00:06:17.316 "uuid": 
"328b9bc7-bf3e-5edb-8a00-60214670d6bd", 00:06:17.316 "is_configured": true, 00:06:17.316 "data_offset": 2048, 00:06:17.316 "data_size": 63488 00:06:17.316 } 00:06:17.316 ] 00:06:17.316 }' 00:06:17.316 14:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:17.316 14:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:17.576 14:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:17.576 14:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.576 14:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:17.576 [2024-10-01 14:31:09.106471] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:17.576 [2024-10-01 14:31:09.106521] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:17.576 [2024-10-01 14:31:09.109900] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:17.576 [2024-10-01 14:31:09.110065] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:17.576 [2024-10-01 14:31:09.110168] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:17.576 [2024-10-01 14:31:09.110244] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:06:17.576 { 00:06:17.576 "results": [ 00:06:17.576 { 00:06:17.576 "job": "raid_bdev1", 00:06:17.576 "core_mask": "0x1", 00:06:17.576 "workload": "randrw", 00:06:17.576 "percentage": 50, 00:06:17.576 "status": "finished", 00:06:17.576 "queue_depth": 1, 00:06:17.576 "io_size": 131072, 00:06:17.576 "runtime": 1.32172, 00:06:17.576 "iops": 12495.838755560935, 00:06:17.576 "mibps": 1561.979844445117, 00:06:17.576 "io_failed": 1, 00:06:17.576 "io_timeout": 0, 00:06:17.576 "avg_latency_us": 
111.25423000079172, 00:06:17.576 "min_latency_us": 33.28, 00:06:17.576 "max_latency_us": 1726.6215384615384 00:06:17.576 } 00:06:17.576 ], 00:06:17.576 "core_count": 1 00:06:17.576 } 00:06:17.576 14:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.576 14:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61214 00:06:17.576 14:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 61214 ']' 00:06:17.576 14:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 61214 00:06:17.576 14:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:06:17.576 14:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:17.576 14:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61214 00:06:17.576 14:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:17.576 killing process with pid 61214 00:06:17.576 14:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:17.576 14:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61214' 00:06:17.576 14:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 61214 00:06:17.576 [2024-10-01 14:31:09.145189] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:17.576 14:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 61214 00:06:17.576 [2024-10-01 14:31:09.242626] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:18.958 14:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.DCSpTcyF6o 00:06:18.958 14:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:06:18.958 14:31:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:06:18.958 14:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:06:18.958 14:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:06:18.958 14:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:18.958 14:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:18.959 14:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:06:18.959 00:06:18.959 real 0m4.009s 00:06:18.959 user 0m4.732s 00:06:18.959 sys 0m0.522s 00:06:18.959 ************************************ 00:06:18.959 END TEST raid_read_error_test 00:06:18.959 ************************************ 00:06:18.959 14:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.959 14:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.959 14:31:10 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:06:18.959 14:31:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:18.959 14:31:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.959 14:31:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:18.959 ************************************ 00:06:18.959 START TEST raid_write_error_test 00:06:18.959 ************************************ 00:06:18.959 14:31:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write 00:06:18.959 14:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:06:18.959 14:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:06:18.959 14:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 
00:06:18.959 14:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:06:18.959 14:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:18.959 14:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:06:18.959 14:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:18.959 14:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:18.959 14:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:06:18.959 14:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:18.959 14:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:18.959 14:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:18.959 14:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:06:18.959 14:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:06:18.959 14:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:06:18.959 14:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:06:18.959 14:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:06:18.959 14:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:06:18.959 14:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:06:18.959 14:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:06:18.959 14:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:06:18.959 14:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:06:18.959 14:31:10 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.FW9IrlQJDy 00:06:18.959 14:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61354 00:06:18.959 14:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61354 00:06:18.959 14:31:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 61354 ']' 00:06:18.959 14:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:06:18.959 14:31:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.959 14:31:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:18.959 14:31:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.959 14:31:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:18.959 14:31:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.959 [2024-10-01 14:31:10.403453] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:06:18.959 [2024-10-01 14:31:10.403865] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61354 ] 00:06:18.959 [2024-10-01 14:31:10.569751] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.218 [2024-10-01 14:31:10.898179] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.475 [2024-10-01 14:31:11.074599] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:19.475 [2024-10-01 14:31:11.074657] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:19.734 14:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:19.734 14:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:06:19.734 14:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:19.734 14:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:06:19.734 14:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.734 14:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.734 BaseBdev1_malloc 00:06:19.734 14:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.734 14:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:06:19.734 14:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.734 14:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.734 true 00:06:19.734 14:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:06:19.734 14:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:06:19.734 14:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.734 14:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.734 [2024-10-01 14:31:11.392999] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:06:19.734 [2024-10-01 14:31:11.393361] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:19.734 [2024-10-01 14:31:11.393401] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:06:19.734 [2024-10-01 14:31:11.393415] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:19.735 [2024-10-01 14:31:11.396260] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:19.735 [2024-10-01 14:31:11.396335] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:06:19.735 BaseBdev1 00:06:19.735 14:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.735 14:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:19.735 14:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:06:19.735 14:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.735 14:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.995 BaseBdev2_malloc 00:06:19.995 14:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.995 14:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:06:19.995 14:31:11 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.995 14:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.995 true 00:06:19.995 14:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.995 14:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:06:19.995 14:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.995 14:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.995 [2024-10-01 14:31:11.468492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:06:19.995 [2024-10-01 14:31:11.468932] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:19.995 [2024-10-01 14:31:11.468978] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:19.995 [2024-10-01 14:31:11.468996] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:19.995 [2024-10-01 14:31:11.472231] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:19.995 [2024-10-01 14:31:11.472491] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:06:19.995 BaseBdev2 00:06:19.995 14:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.995 14:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:06:19.996 14:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.996 14:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.996 [2024-10-01 14:31:11.480877] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:06:19.996 [2024-10-01 14:31:11.483300] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:19.996 [2024-10-01 14:31:11.483765] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:19.996 [2024-10-01 14:31:11.483792] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:19.996 [2024-10-01 14:31:11.484134] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:19.996 [2024-10-01 14:31:11.484320] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:19.996 [2024-10-01 14:31:11.484330] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:06:19.996 [2024-10-01 14:31:11.484536] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:19.996 14:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.996 14:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:06:19.996 14:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:19.996 14:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:19.996 14:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:19.996 14:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:19.996 14:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:19.996 14:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:19.996 14:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:19.996 14:31:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:19.996 14:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:19.996 14:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:19.996 14:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.996 14:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:19.996 14:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.996 14:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.996 14:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:19.996 "name": "raid_bdev1", 00:06:19.996 "uuid": "9ef12c4e-8068-4214-b9e4-f0e84f639bb4", 00:06:19.996 "strip_size_kb": 64, 00:06:19.996 "state": "online", 00:06:19.996 "raid_level": "concat", 00:06:19.996 "superblock": true, 00:06:19.996 "num_base_bdevs": 2, 00:06:19.996 "num_base_bdevs_discovered": 2, 00:06:19.996 "num_base_bdevs_operational": 2, 00:06:19.996 "base_bdevs_list": [ 00:06:19.996 { 00:06:19.996 "name": "BaseBdev1", 00:06:19.996 "uuid": "1eb0abcd-171d-5ccf-9534-b1bc757a0509", 00:06:19.996 "is_configured": true, 00:06:19.996 "data_offset": 2048, 00:06:19.996 "data_size": 63488 00:06:19.996 }, 00:06:19.996 { 00:06:19.996 "name": "BaseBdev2", 00:06:19.996 "uuid": "cc8c5b45-f9ba-53e0-b4d6-414a2add131a", 00:06:19.996 "is_configured": true, 00:06:19.996 "data_offset": 2048, 00:06:19.996 "data_size": 63488 00:06:19.996 } 00:06:19.996 ] 00:06:19.996 }' 00:06:19.996 14:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:19.996 14:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.258 14:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:06:20.258 14:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:20.258 [2024-10-01 14:31:11.914408] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:21.199 14:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:06:21.199 14:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.199 14:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.199 14:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.199 14:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:06:21.199 14:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:06:21.199 14:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:06:21.199 14:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:06:21.199 14:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:21.199 14:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:21.199 14:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:21.199 14:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:21.199 14:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:21.199 14:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:21.199 14:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:06:21.199 14:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:21.199 14:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:21.199 14:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:21.199 14:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:21.199 14:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.199 14:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.199 14:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.200 14:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:21.200 "name": "raid_bdev1", 00:06:21.200 "uuid": "9ef12c4e-8068-4214-b9e4-f0e84f639bb4", 00:06:21.200 "strip_size_kb": 64, 00:06:21.200 "state": "online", 00:06:21.200 "raid_level": "concat", 00:06:21.200 "superblock": true, 00:06:21.200 "num_base_bdevs": 2, 00:06:21.200 "num_base_bdevs_discovered": 2, 00:06:21.200 "num_base_bdevs_operational": 2, 00:06:21.200 "base_bdevs_list": [ 00:06:21.200 { 00:06:21.200 "name": "BaseBdev1", 00:06:21.200 "uuid": "1eb0abcd-171d-5ccf-9534-b1bc757a0509", 00:06:21.200 "is_configured": true, 00:06:21.200 "data_offset": 2048, 00:06:21.200 "data_size": 63488 00:06:21.200 }, 00:06:21.200 { 00:06:21.200 "name": "BaseBdev2", 00:06:21.200 "uuid": "cc8c5b45-f9ba-53e0-b4d6-414a2add131a", 00:06:21.200 "is_configured": true, 00:06:21.200 "data_offset": 2048, 00:06:21.200 "data_size": 63488 00:06:21.200 } 00:06:21.200 ] 00:06:21.200 }' 00:06:21.200 14:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:21.200 14:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.773 14:31:13 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:21.773 14:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.773 14:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.773 [2024-10-01 14:31:13.162983] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:21.773 [2024-10-01 14:31:13.163244] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:21.773 [2024-10-01 14:31:13.166498] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:21.773 [2024-10-01 14:31:13.166719] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:21.773 [2024-10-01 14:31:13.166789] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:21.773 [2024-10-01 14:31:13.166927] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:06:21.773 { 00:06:21.773 "results": [ 00:06:21.773 { 00:06:21.773 "job": "raid_bdev1", 00:06:21.773 "core_mask": "0x1", 00:06:21.773 "workload": "randrw", 00:06:21.773 "percentage": 50, 00:06:21.773 "status": "finished", 00:06:21.773 "queue_depth": 1, 00:06:21.773 "io_size": 131072, 00:06:21.773 "runtime": 1.246329, 00:06:21.773 "iops": 11826.73274873649, 00:06:21.773 "mibps": 1478.3415935920611, 00:06:21.774 "io_failed": 1, 00:06:21.774 "io_timeout": 0, 00:06:21.774 "avg_latency_us": 117.7768455328675, 00:06:21.774 "min_latency_us": 33.47692307692308, 00:06:21.774 "max_latency_us": 1726.6215384615384 00:06:21.774 } 00:06:21.774 ], 00:06:21.774 "core_count": 1 00:06:21.774 } 00:06:21.774 14:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.774 14:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61354 00:06:21.774 14:31:13 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 61354 ']' 00:06:21.774 14:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 61354 00:06:21.774 14:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:06:21.774 14:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:21.774 14:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61354 00:06:21.774 killing process with pid 61354 00:06:21.774 14:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:21.774 14:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:21.774 14:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61354' 00:06:21.774 14:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 61354 00:06:21.774 14:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 61354 00:06:21.774 [2024-10-01 14:31:13.204813] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:21.774 [2024-10-01 14:31:13.305161] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:22.714 14:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.FW9IrlQJDy 00:06:22.714 14:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:06:22.714 14:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:06:22.714 14:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.80 00:06:22.714 14:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:06:22.714 14:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:22.714 14:31:14 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:22.714 14:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.80 != \0\.\0\0 ]] 00:06:22.714 00:06:22.714 real 0m3.919s 00:06:22.714 user 0m4.557s 00:06:22.714 sys 0m0.559s 00:06:22.714 14:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.714 ************************************ 00:06:22.714 END TEST raid_write_error_test 00:06:22.714 14:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:22.714 ************************************ 00:06:22.714 14:31:14 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:06:22.714 14:31:14 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:06:22.714 14:31:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:22.714 14:31:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.714 14:31:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:22.714 ************************************ 00:06:22.714 START TEST raid_state_function_test 00:06:22.714 ************************************ 00:06:22.714 14:31:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false 00:06:22.714 14:31:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:06:22.714 14:31:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:22.714 14:31:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:06:22.714 14:31:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:22.714 14:31:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:22.714 14:31:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:06:22.714 14:31:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:22.714 14:31:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:22.714 14:31:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:22.714 14:31:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:22.714 14:31:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:22.714 14:31:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:22.714 Process raid pid: 61494 00:06:22.714 14:31:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:22.714 14:31:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:22.714 14:31:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:22.714 14:31:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:22.714 14:31:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:22.714 14:31:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:22.714 14:31:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:06:22.714 14:31:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:06:22.714 14:31:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:06:22.714 14:31:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:06:22.714 14:31:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61494 00:06:22.714 14:31:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 
'Process raid pid: 61494' 00:06:22.714 14:31:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61494 00:06:22.714 14:31:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 61494 ']' 00:06:22.714 14:31:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.714 14:31:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:22.714 14:31:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.714 14:31:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:22.714 14:31:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:22.714 14:31:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:22.714 [2024-10-01 14:31:14.363736] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:06:22.714 [2024-10-01 14:31:14.363874] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:22.975 [2024-10-01 14:31:14.514973] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.238 [2024-10-01 14:31:14.732146] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.238 [2024-10-01 14:31:14.875904] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:23.238 [2024-10-01 14:31:14.875958] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:23.833 14:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:23.833 14:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:06:23.834 14:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:23.834 14:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.834 14:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:23.834 [2024-10-01 14:31:15.262040] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:23.834 [2024-10-01 14:31:15.262102] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:23.834 [2024-10-01 14:31:15.262113] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:23.834 [2024-10-01 14:31:15.262122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:23.834 14:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.834 14:31:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:06:23.834 14:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:23.834 14:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:23.834 14:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:06:23.834 14:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:06:23.834 14:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:23.834 14:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:23.834 14:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:23.834 14:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:23.834 14:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:23.834 14:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:23.834 14:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.834 14:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:23.834 14:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:23.834 14:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.834 14:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:23.834 "name": "Existed_Raid", 00:06:23.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:23.834 "strip_size_kb": 0, 00:06:23.834 "state": "configuring", 00:06:23.834 
"raid_level": "raid1", 00:06:23.834 "superblock": false, 00:06:23.834 "num_base_bdevs": 2, 00:06:23.834 "num_base_bdevs_discovered": 0, 00:06:23.834 "num_base_bdevs_operational": 2, 00:06:23.834 "base_bdevs_list": [ 00:06:23.834 { 00:06:23.834 "name": "BaseBdev1", 00:06:23.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:23.834 "is_configured": false, 00:06:23.834 "data_offset": 0, 00:06:23.834 "data_size": 0 00:06:23.834 }, 00:06:23.834 { 00:06:23.834 "name": "BaseBdev2", 00:06:23.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:23.834 "is_configured": false, 00:06:23.834 "data_offset": 0, 00:06:23.834 "data_size": 0 00:06:23.834 } 00:06:23.834 ] 00:06:23.834 }' 00:06:23.834 14:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:23.834 14:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.094 14:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:24.094 14:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.094 14:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.094 [2024-10-01 14:31:15.586047] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:24.094 [2024-10-01 14:31:15.586086] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:06:24.094 14:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.094 14:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:24.094 14:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.094 14:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:06:24.094 [2024-10-01 14:31:15.594067] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:24.094 [2024-10-01 14:31:15.594110] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:24.094 [2024-10-01 14:31:15.594118] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:24.094 [2024-10-01 14:31:15.594129] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:24.094 14:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.094 14:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:24.094 14:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.094 14:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.094 [2024-10-01 14:31:15.644054] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:24.094 BaseBdev1 00:06:24.094 14:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.094 14:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:24.094 14:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:06:24.094 14:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:24.094 14:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:06:24.094 14:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:24.094 14:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:24.094 14:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:06:24.094 14:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.094 14:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.094 14:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.094 14:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:24.094 14:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.094 14:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.094 [ 00:06:24.094 { 00:06:24.094 "name": "BaseBdev1", 00:06:24.094 "aliases": [ 00:06:24.094 "33a24ee3-0425-402f-b458-cfb73ac0bbde" 00:06:24.094 ], 00:06:24.094 "product_name": "Malloc disk", 00:06:24.094 "block_size": 512, 00:06:24.094 "num_blocks": 65536, 00:06:24.094 "uuid": "33a24ee3-0425-402f-b458-cfb73ac0bbde", 00:06:24.094 "assigned_rate_limits": { 00:06:24.094 "rw_ios_per_sec": 0, 00:06:24.094 "rw_mbytes_per_sec": 0, 00:06:24.094 "r_mbytes_per_sec": 0, 00:06:24.094 "w_mbytes_per_sec": 0 00:06:24.094 }, 00:06:24.094 "claimed": true, 00:06:24.094 "claim_type": "exclusive_write", 00:06:24.094 "zoned": false, 00:06:24.094 "supported_io_types": { 00:06:24.094 "read": true, 00:06:24.094 "write": true, 00:06:24.094 "unmap": true, 00:06:24.094 "flush": true, 00:06:24.094 "reset": true, 00:06:24.094 "nvme_admin": false, 00:06:24.094 "nvme_io": false, 00:06:24.094 "nvme_io_md": false, 00:06:24.094 "write_zeroes": true, 00:06:24.094 "zcopy": true, 00:06:24.094 "get_zone_info": false, 00:06:24.094 "zone_management": false, 00:06:24.094 "zone_append": false, 00:06:24.094 "compare": false, 00:06:24.094 "compare_and_write": false, 00:06:24.094 "abort": true, 00:06:24.094 "seek_hole": false, 00:06:24.094 "seek_data": false, 00:06:24.094 "copy": true, 00:06:24.094 "nvme_iov_md": 
false 00:06:24.094 }, 00:06:24.094 "memory_domains": [ 00:06:24.094 { 00:06:24.094 "dma_device_id": "system", 00:06:24.094 "dma_device_type": 1 00:06:24.094 }, 00:06:24.094 { 00:06:24.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:24.094 "dma_device_type": 2 00:06:24.094 } 00:06:24.094 ], 00:06:24.094 "driver_specific": {} 00:06:24.094 } 00:06:24.094 ] 00:06:24.094 14:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.094 14:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:06:24.094 14:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:06:24.094 14:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:24.094 14:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:24.094 14:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:06:24.094 14:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:06:24.094 14:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:24.094 14:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:24.094 14:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:24.094 14:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:24.094 14:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:24.094 14:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:24.094 14:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:24.094 
14:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.094 14:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.094 14:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.094 14:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:24.094 "name": "Existed_Raid", 00:06:24.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:24.094 "strip_size_kb": 0, 00:06:24.094 "state": "configuring", 00:06:24.094 "raid_level": "raid1", 00:06:24.094 "superblock": false, 00:06:24.094 "num_base_bdevs": 2, 00:06:24.094 "num_base_bdevs_discovered": 1, 00:06:24.094 "num_base_bdevs_operational": 2, 00:06:24.094 "base_bdevs_list": [ 00:06:24.094 { 00:06:24.094 "name": "BaseBdev1", 00:06:24.094 "uuid": "33a24ee3-0425-402f-b458-cfb73ac0bbde", 00:06:24.094 "is_configured": true, 00:06:24.094 "data_offset": 0, 00:06:24.094 "data_size": 65536 00:06:24.094 }, 00:06:24.094 { 00:06:24.094 "name": "BaseBdev2", 00:06:24.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:24.094 "is_configured": false, 00:06:24.094 "data_offset": 0, 00:06:24.094 "data_size": 0 00:06:24.094 } 00:06:24.094 ] 00:06:24.095 }' 00:06:24.095 14:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:24.095 14:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.358 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:24.358 14:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.358 14:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.358 [2024-10-01 14:31:16.012193] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:24.358 [2024-10-01 14:31:16.012384] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:06:24.358 14:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.358 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:24.358 14:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.358 14:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.358 [2024-10-01 14:31:16.020220] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:24.358 [2024-10-01 14:31:16.022265] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:24.358 [2024-10-01 14:31:16.022399] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:24.358 14:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.358 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:24.358 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:24.358 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:06:24.358 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:24.358 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:24.358 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:06:24.358 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:06:24.358 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:06:24.358 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:24.358 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:24.358 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:24.358 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:24.358 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:24.358 14:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.358 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:24.358 14:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.620 14:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.620 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:24.620 "name": "Existed_Raid", 00:06:24.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:24.620 "strip_size_kb": 0, 00:06:24.620 "state": "configuring", 00:06:24.620 "raid_level": "raid1", 00:06:24.620 "superblock": false, 00:06:24.620 "num_base_bdevs": 2, 00:06:24.620 "num_base_bdevs_discovered": 1, 00:06:24.620 "num_base_bdevs_operational": 2, 00:06:24.620 "base_bdevs_list": [ 00:06:24.620 { 00:06:24.620 "name": "BaseBdev1", 00:06:24.620 "uuid": "33a24ee3-0425-402f-b458-cfb73ac0bbde", 00:06:24.620 "is_configured": true, 00:06:24.620 "data_offset": 0, 00:06:24.620 "data_size": 65536 00:06:24.620 }, 00:06:24.620 { 00:06:24.620 "name": "BaseBdev2", 00:06:24.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:24.620 "is_configured": false, 00:06:24.620 "data_offset": 0, 00:06:24.620 "data_size": 0 00:06:24.620 } 00:06:24.620 ] 
00:06:24.620 }' 00:06:24.620 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:24.620 14:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.881 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:24.881 14:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.881 14:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.881 [2024-10-01 14:31:16.459813] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:24.881 [2024-10-01 14:31:16.459862] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:24.882 [2024-10-01 14:31:16.459873] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:06:24.882 [2024-10-01 14:31:16.460129] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:24.882 [2024-10-01 14:31:16.460271] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:24.882 [2024-10-01 14:31:16.460282] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:06:24.882 [2024-10-01 14:31:16.460514] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:24.882 BaseBdev2 00:06:24.882 14:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.882 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:24.882 14:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:06:24.882 14:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:24.882 14:31:16 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@901 -- # local i 00:06:24.882 14:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:24.882 14:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:24.882 14:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:06:24.882 14:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.882 14:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.882 14:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.882 14:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:24.882 14:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.882 14:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.882 [ 00:06:24.882 { 00:06:24.882 "name": "BaseBdev2", 00:06:24.882 "aliases": [ 00:06:24.882 "9b990de6-603d-4b09-8470-682d02fbf602" 00:06:24.882 ], 00:06:24.882 "product_name": "Malloc disk", 00:06:24.882 "block_size": 512, 00:06:24.882 "num_blocks": 65536, 00:06:24.882 "uuid": "9b990de6-603d-4b09-8470-682d02fbf602", 00:06:24.882 "assigned_rate_limits": { 00:06:24.882 "rw_ios_per_sec": 0, 00:06:24.882 "rw_mbytes_per_sec": 0, 00:06:24.882 "r_mbytes_per_sec": 0, 00:06:24.882 "w_mbytes_per_sec": 0 00:06:24.882 }, 00:06:24.882 "claimed": true, 00:06:24.882 "claim_type": "exclusive_write", 00:06:24.882 "zoned": false, 00:06:24.882 "supported_io_types": { 00:06:24.882 "read": true, 00:06:24.882 "write": true, 00:06:24.882 "unmap": true, 00:06:24.882 "flush": true, 00:06:24.882 "reset": true, 00:06:24.882 "nvme_admin": false, 00:06:24.882 "nvme_io": false, 00:06:24.882 "nvme_io_md": false, 00:06:24.882 "write_zeroes": 
true, 00:06:24.882 "zcopy": true, 00:06:24.882 "get_zone_info": false, 00:06:24.882 "zone_management": false, 00:06:24.882 "zone_append": false, 00:06:24.882 "compare": false, 00:06:24.882 "compare_and_write": false, 00:06:24.882 "abort": true, 00:06:24.882 "seek_hole": false, 00:06:24.882 "seek_data": false, 00:06:24.882 "copy": true, 00:06:24.882 "nvme_iov_md": false 00:06:24.882 }, 00:06:24.882 "memory_domains": [ 00:06:24.882 { 00:06:24.882 "dma_device_id": "system", 00:06:24.882 "dma_device_type": 1 00:06:24.882 }, 00:06:24.882 { 00:06:24.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:24.882 "dma_device_type": 2 00:06:24.882 } 00:06:24.882 ], 00:06:24.882 "driver_specific": {} 00:06:24.882 } 00:06:24.882 ] 00:06:24.882 14:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.882 14:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:06:24.882 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:24.882 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:24.882 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:06:24.882 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:24.882 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:24.882 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:06:24.882 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:06:24.882 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:24.882 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:24.882 14:31:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:24.882 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:24.882 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:24.882 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:24.882 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:24.882 14:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.882 14:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.882 14:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.882 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:24.882 "name": "Existed_Raid", 00:06:24.882 "uuid": "b0f5a6f0-1b7b-4a91-a41d-9adcfae13ad5", 00:06:24.882 "strip_size_kb": 0, 00:06:24.882 "state": "online", 00:06:24.882 "raid_level": "raid1", 00:06:24.882 "superblock": false, 00:06:24.882 "num_base_bdevs": 2, 00:06:24.882 "num_base_bdevs_discovered": 2, 00:06:24.882 "num_base_bdevs_operational": 2, 00:06:24.882 "base_bdevs_list": [ 00:06:24.882 { 00:06:24.882 "name": "BaseBdev1", 00:06:24.882 "uuid": "33a24ee3-0425-402f-b458-cfb73ac0bbde", 00:06:24.882 "is_configured": true, 00:06:24.882 "data_offset": 0, 00:06:24.882 "data_size": 65536 00:06:24.882 }, 00:06:24.882 { 00:06:24.882 "name": "BaseBdev2", 00:06:24.882 "uuid": "9b990de6-603d-4b09-8470-682d02fbf602", 00:06:24.882 "is_configured": true, 00:06:24.882 "data_offset": 0, 00:06:24.882 "data_size": 65536 00:06:24.882 } 00:06:24.882 ] 00:06:24.882 }' 00:06:24.882 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:24.882 14:31:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:25.142 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:25.142 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:25.142 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:25.142 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:25.142 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:25.142 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:25.142 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:25.142 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:25.142 14:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.142 14:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:25.142 [2024-10-01 14:31:16.816256] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:25.403 14:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.403 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:25.403 "name": "Existed_Raid", 00:06:25.403 "aliases": [ 00:06:25.403 "b0f5a6f0-1b7b-4a91-a41d-9adcfae13ad5" 00:06:25.403 ], 00:06:25.403 "product_name": "Raid Volume", 00:06:25.403 "block_size": 512, 00:06:25.403 "num_blocks": 65536, 00:06:25.403 "uuid": "b0f5a6f0-1b7b-4a91-a41d-9adcfae13ad5", 00:06:25.403 "assigned_rate_limits": { 00:06:25.403 "rw_ios_per_sec": 0, 00:06:25.403 "rw_mbytes_per_sec": 0, 00:06:25.403 "r_mbytes_per_sec": 0, 00:06:25.403 
"w_mbytes_per_sec": 0 00:06:25.403 }, 00:06:25.403 "claimed": false, 00:06:25.403 "zoned": false, 00:06:25.403 "supported_io_types": { 00:06:25.403 "read": true, 00:06:25.403 "write": true, 00:06:25.403 "unmap": false, 00:06:25.403 "flush": false, 00:06:25.403 "reset": true, 00:06:25.403 "nvme_admin": false, 00:06:25.403 "nvme_io": false, 00:06:25.403 "nvme_io_md": false, 00:06:25.403 "write_zeroes": true, 00:06:25.403 "zcopy": false, 00:06:25.403 "get_zone_info": false, 00:06:25.403 "zone_management": false, 00:06:25.403 "zone_append": false, 00:06:25.403 "compare": false, 00:06:25.403 "compare_and_write": false, 00:06:25.403 "abort": false, 00:06:25.403 "seek_hole": false, 00:06:25.403 "seek_data": false, 00:06:25.403 "copy": false, 00:06:25.403 "nvme_iov_md": false 00:06:25.403 }, 00:06:25.403 "memory_domains": [ 00:06:25.403 { 00:06:25.403 "dma_device_id": "system", 00:06:25.403 "dma_device_type": 1 00:06:25.403 }, 00:06:25.403 { 00:06:25.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:25.403 "dma_device_type": 2 00:06:25.403 }, 00:06:25.403 { 00:06:25.403 "dma_device_id": "system", 00:06:25.403 "dma_device_type": 1 00:06:25.403 }, 00:06:25.403 { 00:06:25.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:25.403 "dma_device_type": 2 00:06:25.403 } 00:06:25.403 ], 00:06:25.403 "driver_specific": { 00:06:25.403 "raid": { 00:06:25.403 "uuid": "b0f5a6f0-1b7b-4a91-a41d-9adcfae13ad5", 00:06:25.403 "strip_size_kb": 0, 00:06:25.403 "state": "online", 00:06:25.403 "raid_level": "raid1", 00:06:25.403 "superblock": false, 00:06:25.403 "num_base_bdevs": 2, 00:06:25.403 "num_base_bdevs_discovered": 2, 00:06:25.403 "num_base_bdevs_operational": 2, 00:06:25.403 "base_bdevs_list": [ 00:06:25.403 { 00:06:25.403 "name": "BaseBdev1", 00:06:25.403 "uuid": "33a24ee3-0425-402f-b458-cfb73ac0bbde", 00:06:25.403 "is_configured": true, 00:06:25.403 "data_offset": 0, 00:06:25.403 "data_size": 65536 00:06:25.403 }, 00:06:25.403 { 00:06:25.403 "name": "BaseBdev2", 00:06:25.403 "uuid": 
"9b990de6-603d-4b09-8470-682d02fbf602", 00:06:25.403 "is_configured": true, 00:06:25.403 "data_offset": 0, 00:06:25.403 "data_size": 65536 00:06:25.403 } 00:06:25.403 ] 00:06:25.403 } 00:06:25.403 } 00:06:25.403 }' 00:06:25.403 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:25.403 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:25.403 BaseBdev2' 00:06:25.403 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:25.403 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:25.403 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:25.403 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:25.403 14:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.403 14:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:25.404 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:25.404 14:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.404 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:25.404 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:25.404 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:25.404 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:25.404 14:31:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.404 14:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:25.404 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:25.404 14:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.404 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:25.404 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:25.404 14:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:25.404 14:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.404 14:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:25.404 [2024-10-01 14:31:16.980057] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:25.404 14:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.404 14:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:25.404 14:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:06:25.404 14:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:25.404 14:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:06:25.404 14:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:06:25.404 14:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:06:25.404 14:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:06:25.404 14:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:25.404 14:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:06:25.404 14:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:06:25.404 14:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:25.404 14:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:25.404 14:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:25.404 14:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:25.404 14:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:25.404 14:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:25.404 14:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.404 14:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:25.404 14:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:25.404 14:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.404 14:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:25.404 "name": "Existed_Raid", 00:06:25.404 "uuid": "b0f5a6f0-1b7b-4a91-a41d-9adcfae13ad5", 00:06:25.404 "strip_size_kb": 0, 00:06:25.404 "state": "online", 00:06:25.404 "raid_level": "raid1", 00:06:25.404 "superblock": false, 00:06:25.404 "num_base_bdevs": 2, 00:06:25.404 "num_base_bdevs_discovered": 1, 00:06:25.404 "num_base_bdevs_operational": 1, 00:06:25.404 "base_bdevs_list": [ 00:06:25.404 { 
00:06:25.404 "name": null, 00:06:25.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:25.404 "is_configured": false, 00:06:25.404 "data_offset": 0, 00:06:25.404 "data_size": 65536 00:06:25.404 }, 00:06:25.404 { 00:06:25.404 "name": "BaseBdev2", 00:06:25.404 "uuid": "9b990de6-603d-4b09-8470-682d02fbf602", 00:06:25.404 "is_configured": true, 00:06:25.404 "data_offset": 0, 00:06:25.404 "data_size": 65536 00:06:25.404 } 00:06:25.404 ] 00:06:25.404 }' 00:06:25.404 14:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:25.404 14:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:25.975 14:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:25.975 14:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:25.975 14:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:25.975 14:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.975 14:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:25.975 14:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:25.975 14:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.975 14:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:25.975 14:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:25.975 14:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:25.975 14:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.975 14:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:06:25.975 [2024-10-01 14:31:17.422339] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:25.975 [2024-10-01 14:31:17.422434] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:25.975 [2024-10-01 14:31:17.483534] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:25.975 [2024-10-01 14:31:17.483769] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:25.975 [2024-10-01 14:31:17.483871] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:06:25.975 14:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.975 14:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:25.975 14:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:25.975 14:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:25.975 14:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:25.975 14:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.975 14:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:25.975 14:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.975 14:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:25.975 14:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:25.975 14:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:25.975 14:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61494 00:06:25.975 14:31:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 61494 ']' 00:06:25.975 14:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 61494 00:06:25.975 14:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:06:25.975 14:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:25.975 14:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61494 00:06:25.975 killing process with pid 61494 00:06:25.975 14:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:25.975 14:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:25.975 14:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61494' 00:06:25.975 14:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 61494 00:06:25.975 [2024-10-01 14:31:17.543725] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:25.975 14:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 61494 00:06:25.975 [2024-10-01 14:31:17.554733] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:26.984 ************************************ 00:06:26.984 END TEST raid_state_function_test 00:06:26.984 ************************************ 00:06:26.984 14:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:06:26.984 00:06:26.984 real 0m4.124s 00:06:26.984 user 0m5.918s 00:06:26.984 sys 0m0.605s 00:06:26.984 14:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.984 14:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:26.984 14:31:18 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:06:26.984 14:31:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:26.984 14:31:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.984 14:31:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:26.984 ************************************ 00:06:26.984 START TEST raid_state_function_test_sb 00:06:26.984 ************************************ 00:06:26.984 14:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:06:26.984 14:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:06:26.984 14:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:26.984 14:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:06:26.984 14:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:26.984 14:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:26.984 14:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:26.984 14:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:26.984 14:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:26.984 14:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:26.984 14:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:26.984 14:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:26.984 14:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:26.984 14:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:26.984 14:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:26.984 14:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:26.984 14:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:26.984 14:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:26.984 Process raid pid: 61736 00:06:26.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.984 14:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:26.984 14:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:06:26.984 14:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:06:26.984 14:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:06:26.984 14:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:06:26.984 14:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61736 00:06:26.984 14:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61736' 00:06:26.984 14:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61736 00:06:26.984 14:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 61736 ']' 00:06:26.984 14:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.984 14:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:26.984 14:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.984 14:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:26.984 14:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:26.984 14:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:26.984 [2024-10-01 14:31:18.546679] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:06:26.984 [2024-10-01 14:31:18.546962] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:27.259 [2024-10-01 14:31:18.698194] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.259 [2024-10-01 14:31:18.914650] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.522 [2024-10-01 14:31:19.056907] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:27.522 [2024-10-01 14:31:19.057116] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:27.785 14:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:27.785 14:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:06:27.785 14:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:27.785 14:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.785 14:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:27.785 [2024-10-01 14:31:19.408798] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:27.785 [2024-10-01 14:31:19.408977] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:27.785 [2024-10-01 14:31:19.409045] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:27.785 [2024-10-01 14:31:19.409077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:27.785 14:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.785 14:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:06:27.785 14:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:27.785 14:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:27.785 14:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:06:27.785 14:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:06:27.785 14:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:27.785 14:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:27.785 14:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:27.785 14:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:27.785 14:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:27.785 14:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:27.785 14:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:06:27.785 14:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.785 14:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:27.785 14:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.785 14:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:27.785 "name": "Existed_Raid", 00:06:27.785 "uuid": "2d4b685b-2c32-4bf2-9428-7b18c5a9d4fe", 00:06:27.785 "strip_size_kb": 0, 00:06:27.785 "state": "configuring", 00:06:27.785 "raid_level": "raid1", 00:06:27.785 "superblock": true, 00:06:27.785 "num_base_bdevs": 2, 00:06:27.785 "num_base_bdevs_discovered": 0, 00:06:27.785 "num_base_bdevs_operational": 2, 00:06:27.785 "base_bdevs_list": [ 00:06:27.785 { 00:06:27.785 "name": "BaseBdev1", 00:06:27.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:27.785 "is_configured": false, 00:06:27.785 "data_offset": 0, 00:06:27.785 "data_size": 0 00:06:27.785 }, 00:06:27.785 { 00:06:27.785 "name": "BaseBdev2", 00:06:27.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:27.785 "is_configured": false, 00:06:27.785 "data_offset": 0, 00:06:27.785 "data_size": 0 00:06:27.785 } 00:06:27.785 ] 00:06:27.785 }' 00:06:27.785 14:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:27.785 14:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:28.437 14:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:28.437 14:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.437 14:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:28.437 [2024-10-01 14:31:19.744782] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:06:28.437 [2024-10-01 14:31:19.744823] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:06:28.437 14:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.437 14:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:28.437 14:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.437 14:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:28.437 [2024-10-01 14:31:19.752820] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:28.437 [2024-10-01 14:31:19.752867] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:28.437 [2024-10-01 14:31:19.752877] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:28.437 [2024-10-01 14:31:19.752890] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:28.437 14:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.437 14:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:28.437 14:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.437 14:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:28.437 [2024-10-01 14:31:19.800515] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:28.437 BaseBdev1 00:06:28.437 14:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.437 14:31:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:28.437 14:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:06:28.437 14:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:28.437 14:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:06:28.437 14:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:28.437 14:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:28.438 14:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:06:28.438 14:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.438 14:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:28.438 14:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.438 14:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:28.438 14:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.438 14:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:28.438 [ 00:06:28.438 { 00:06:28.438 "name": "BaseBdev1", 00:06:28.438 "aliases": [ 00:06:28.438 "6f9d0d27-8f5b-4f1e-bfe4-40f8037b2712" 00:06:28.438 ], 00:06:28.438 "product_name": "Malloc disk", 00:06:28.438 "block_size": 512, 00:06:28.438 "num_blocks": 65536, 00:06:28.438 "uuid": "6f9d0d27-8f5b-4f1e-bfe4-40f8037b2712", 00:06:28.438 "assigned_rate_limits": { 00:06:28.438 "rw_ios_per_sec": 0, 00:06:28.438 "rw_mbytes_per_sec": 0, 00:06:28.438 "r_mbytes_per_sec": 0, 00:06:28.438 "w_mbytes_per_sec": 0 00:06:28.438 }, 00:06:28.438 "claimed": true, 
00:06:28.438 "claim_type": "exclusive_write", 00:06:28.438 "zoned": false, 00:06:28.438 "supported_io_types": { 00:06:28.438 "read": true, 00:06:28.438 "write": true, 00:06:28.438 "unmap": true, 00:06:28.438 "flush": true, 00:06:28.438 "reset": true, 00:06:28.438 "nvme_admin": false, 00:06:28.438 "nvme_io": false, 00:06:28.438 "nvme_io_md": false, 00:06:28.438 "write_zeroes": true, 00:06:28.438 "zcopy": true, 00:06:28.438 "get_zone_info": false, 00:06:28.438 "zone_management": false, 00:06:28.438 "zone_append": false, 00:06:28.438 "compare": false, 00:06:28.438 "compare_and_write": false, 00:06:28.438 "abort": true, 00:06:28.438 "seek_hole": false, 00:06:28.438 "seek_data": false, 00:06:28.438 "copy": true, 00:06:28.438 "nvme_iov_md": false 00:06:28.438 }, 00:06:28.438 "memory_domains": [ 00:06:28.438 { 00:06:28.438 "dma_device_id": "system", 00:06:28.438 "dma_device_type": 1 00:06:28.438 }, 00:06:28.438 { 00:06:28.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:28.438 "dma_device_type": 2 00:06:28.438 } 00:06:28.438 ], 00:06:28.438 "driver_specific": {} 00:06:28.438 } 00:06:28.438 ] 00:06:28.438 14:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.438 14:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:06:28.438 14:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:06:28.438 14:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:28.438 14:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:28.438 14:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:06:28.438 14:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:06:28.438 14:31:19 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:28.438 14:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:28.438 14:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:28.438 14:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:28.438 14:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:28.438 14:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:28.438 14:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:28.438 14:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.438 14:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:28.438 14:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.438 14:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:28.438 "name": "Existed_Raid", 00:06:28.438 "uuid": "fdfec235-5628-4f48-9925-799704a77f7b", 00:06:28.438 "strip_size_kb": 0, 00:06:28.438 "state": "configuring", 00:06:28.438 "raid_level": "raid1", 00:06:28.438 "superblock": true, 00:06:28.438 "num_base_bdevs": 2, 00:06:28.438 "num_base_bdevs_discovered": 1, 00:06:28.438 "num_base_bdevs_operational": 2, 00:06:28.438 "base_bdevs_list": [ 00:06:28.438 { 00:06:28.438 "name": "BaseBdev1", 00:06:28.438 "uuid": "6f9d0d27-8f5b-4f1e-bfe4-40f8037b2712", 00:06:28.438 "is_configured": true, 00:06:28.438 "data_offset": 2048, 00:06:28.438 "data_size": 63488 00:06:28.438 }, 00:06:28.438 { 00:06:28.438 "name": "BaseBdev2", 00:06:28.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:28.438 "is_configured": false, 00:06:28.438 
"data_offset": 0, 00:06:28.438 "data_size": 0 00:06:28.438 } 00:06:28.438 ] 00:06:28.438 }' 00:06:28.438 14:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:28.438 14:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:28.700 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:28.700 14:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.700 14:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:28.700 [2024-10-01 14:31:20.152694] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:28.700 [2024-10-01 14:31:20.152781] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:06:28.700 14:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.700 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:28.700 14:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.700 14:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:28.701 [2024-10-01 14:31:20.160782] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:28.701 [2024-10-01 14:31:20.162864] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:28.701 [2024-10-01 14:31:20.163056] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:28.701 14:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.701 14:31:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:28.701 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:28.701 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:06:28.701 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:28.701 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:28.701 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:06:28.701 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:06:28.701 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:28.701 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:28.701 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:28.701 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:28.701 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:28.701 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:28.701 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:28.701 14:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.701 14:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:28.701 14:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.701 14:31:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:28.701 "name": "Existed_Raid", 00:06:28.701 "uuid": "055b614d-3d0f-4380-af8d-14d00f81f279", 00:06:28.701 "strip_size_kb": 0, 00:06:28.701 "state": "configuring", 00:06:28.701 "raid_level": "raid1", 00:06:28.701 "superblock": true, 00:06:28.701 "num_base_bdevs": 2, 00:06:28.701 "num_base_bdevs_discovered": 1, 00:06:28.701 "num_base_bdevs_operational": 2, 00:06:28.701 "base_bdevs_list": [ 00:06:28.701 { 00:06:28.701 "name": "BaseBdev1", 00:06:28.701 "uuid": "6f9d0d27-8f5b-4f1e-bfe4-40f8037b2712", 00:06:28.701 "is_configured": true, 00:06:28.701 "data_offset": 2048, 00:06:28.701 "data_size": 63488 00:06:28.701 }, 00:06:28.701 { 00:06:28.701 "name": "BaseBdev2", 00:06:28.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:28.701 "is_configured": false, 00:06:28.701 "data_offset": 0, 00:06:28.701 "data_size": 0 00:06:28.701 } 00:06:28.701 ] 00:06:28.701 }' 00:06:28.701 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:28.701 14:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:28.961 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:28.961 14:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.961 14:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:28.961 [2024-10-01 14:31:20.521989] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:28.961 [2024-10-01 14:31:20.522510] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:28.961 [2024-10-01 14:31:20.522534] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:06:28.961 [2024-10-01 14:31:20.522857] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:28.961 
BaseBdev2 00:06:28.961 [2024-10-01 14:31:20.523009] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:28.961 [2024-10-01 14:31:20.523027] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:06:28.961 [2024-10-01 14:31:20.523169] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:28.961 14:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.961 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:28.961 14:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:06:28.961 14:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:28.961 14:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:06:28.961 14:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:28.961 14:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:28.961 14:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:06:28.961 14:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.961 14:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:28.961 14:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.961 14:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:28.961 14:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.961 14:31:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:06:28.961 [ 00:06:28.961 { 00:06:28.961 "name": "BaseBdev2", 00:06:28.961 "aliases": [ 00:06:28.961 "1d8f9b60-e5c4-42c5-b8ff-9d876f38b449" 00:06:28.961 ], 00:06:28.961 "product_name": "Malloc disk", 00:06:28.961 "block_size": 512, 00:06:28.961 "num_blocks": 65536, 00:06:28.961 "uuid": "1d8f9b60-e5c4-42c5-b8ff-9d876f38b449", 00:06:28.961 "assigned_rate_limits": { 00:06:28.961 "rw_ios_per_sec": 0, 00:06:28.961 "rw_mbytes_per_sec": 0, 00:06:28.961 "r_mbytes_per_sec": 0, 00:06:28.961 "w_mbytes_per_sec": 0 00:06:28.961 }, 00:06:28.961 "claimed": true, 00:06:28.961 "claim_type": "exclusive_write", 00:06:28.961 "zoned": false, 00:06:28.961 "supported_io_types": { 00:06:28.961 "read": true, 00:06:28.961 "write": true, 00:06:28.961 "unmap": true, 00:06:28.961 "flush": true, 00:06:28.961 "reset": true, 00:06:28.961 "nvme_admin": false, 00:06:28.961 "nvme_io": false, 00:06:28.961 "nvme_io_md": false, 00:06:28.961 "write_zeroes": true, 00:06:28.961 "zcopy": true, 00:06:28.961 "get_zone_info": false, 00:06:28.961 "zone_management": false, 00:06:28.961 "zone_append": false, 00:06:28.961 "compare": false, 00:06:28.961 "compare_and_write": false, 00:06:28.961 "abort": true, 00:06:28.961 "seek_hole": false, 00:06:28.961 "seek_data": false, 00:06:28.961 "copy": true, 00:06:28.961 "nvme_iov_md": false 00:06:28.961 }, 00:06:28.961 "memory_domains": [ 00:06:28.961 { 00:06:28.961 "dma_device_id": "system", 00:06:28.961 "dma_device_type": 1 00:06:28.961 }, 00:06:28.961 { 00:06:28.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:28.961 "dma_device_type": 2 00:06:28.961 } 00:06:28.961 ], 00:06:28.961 "driver_specific": {} 00:06:28.961 } 00:06:28.961 ] 00:06:28.961 14:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.961 14:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:06:28.961 14:31:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:28.961 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:28.961 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:06:28.961 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:28.961 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:28.961 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:06:28.961 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:06:28.961 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:28.961 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:28.961 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:28.961 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:28.961 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:28.961 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:28.961 14:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.961 14:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:28.962 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:28.962 14:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.962 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:06:28.962 "name": "Existed_Raid", 00:06:28.962 "uuid": "055b614d-3d0f-4380-af8d-14d00f81f279", 00:06:28.962 "strip_size_kb": 0, 00:06:28.962 "state": "online", 00:06:28.962 "raid_level": "raid1", 00:06:28.962 "superblock": true, 00:06:28.962 "num_base_bdevs": 2, 00:06:28.962 "num_base_bdevs_discovered": 2, 00:06:28.962 "num_base_bdevs_operational": 2, 00:06:28.962 "base_bdevs_list": [ 00:06:28.962 { 00:06:28.962 "name": "BaseBdev1", 00:06:28.962 "uuid": "6f9d0d27-8f5b-4f1e-bfe4-40f8037b2712", 00:06:28.962 "is_configured": true, 00:06:28.962 "data_offset": 2048, 00:06:28.962 "data_size": 63488 00:06:28.962 }, 00:06:28.962 { 00:06:28.962 "name": "BaseBdev2", 00:06:28.962 "uuid": "1d8f9b60-e5c4-42c5-b8ff-9d876f38b449", 00:06:28.962 "is_configured": true, 00:06:28.962 "data_offset": 2048, 00:06:28.962 "data_size": 63488 00:06:28.962 } 00:06:28.962 ] 00:06:28.962 }' 00:06:28.962 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:28.962 14:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:29.221 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:29.221 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:29.221 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:29.221 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:29.221 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:06:29.221 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:29.221 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:29.221 14:31:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:29.221 14:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.221 14:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:29.221 [2024-10-01 14:31:20.882460] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:29.221 14:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.480 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:29.480 "name": "Existed_Raid", 00:06:29.480 "aliases": [ 00:06:29.480 "055b614d-3d0f-4380-af8d-14d00f81f279" 00:06:29.480 ], 00:06:29.480 "product_name": "Raid Volume", 00:06:29.480 "block_size": 512, 00:06:29.480 "num_blocks": 63488, 00:06:29.480 "uuid": "055b614d-3d0f-4380-af8d-14d00f81f279", 00:06:29.480 "assigned_rate_limits": { 00:06:29.480 "rw_ios_per_sec": 0, 00:06:29.480 "rw_mbytes_per_sec": 0, 00:06:29.480 "r_mbytes_per_sec": 0, 00:06:29.480 "w_mbytes_per_sec": 0 00:06:29.480 }, 00:06:29.480 "claimed": false, 00:06:29.480 "zoned": false, 00:06:29.480 "supported_io_types": { 00:06:29.480 "read": true, 00:06:29.480 "write": true, 00:06:29.480 "unmap": false, 00:06:29.480 "flush": false, 00:06:29.480 "reset": true, 00:06:29.480 "nvme_admin": false, 00:06:29.480 "nvme_io": false, 00:06:29.480 "nvme_io_md": false, 00:06:29.480 "write_zeroes": true, 00:06:29.480 "zcopy": false, 00:06:29.480 "get_zone_info": false, 00:06:29.480 "zone_management": false, 00:06:29.480 "zone_append": false, 00:06:29.480 "compare": false, 00:06:29.480 "compare_and_write": false, 00:06:29.480 "abort": false, 00:06:29.480 "seek_hole": false, 00:06:29.480 "seek_data": false, 00:06:29.480 "copy": false, 00:06:29.480 "nvme_iov_md": false 00:06:29.480 }, 00:06:29.480 "memory_domains": [ 00:06:29.480 { 00:06:29.480 "dma_device_id": "system", 00:06:29.480 
"dma_device_type": 1 00:06:29.480 }, 00:06:29.480 { 00:06:29.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:29.480 "dma_device_type": 2 00:06:29.480 }, 00:06:29.480 { 00:06:29.480 "dma_device_id": "system", 00:06:29.480 "dma_device_type": 1 00:06:29.480 }, 00:06:29.480 { 00:06:29.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:29.480 "dma_device_type": 2 00:06:29.480 } 00:06:29.480 ], 00:06:29.480 "driver_specific": { 00:06:29.480 "raid": { 00:06:29.480 "uuid": "055b614d-3d0f-4380-af8d-14d00f81f279", 00:06:29.480 "strip_size_kb": 0, 00:06:29.480 "state": "online", 00:06:29.480 "raid_level": "raid1", 00:06:29.480 "superblock": true, 00:06:29.480 "num_base_bdevs": 2, 00:06:29.480 "num_base_bdevs_discovered": 2, 00:06:29.480 "num_base_bdevs_operational": 2, 00:06:29.480 "base_bdevs_list": [ 00:06:29.480 { 00:06:29.480 "name": "BaseBdev1", 00:06:29.480 "uuid": "6f9d0d27-8f5b-4f1e-bfe4-40f8037b2712", 00:06:29.480 "is_configured": true, 00:06:29.480 "data_offset": 2048, 00:06:29.480 "data_size": 63488 00:06:29.480 }, 00:06:29.480 { 00:06:29.480 "name": "BaseBdev2", 00:06:29.480 "uuid": "1d8f9b60-e5c4-42c5-b8ff-9d876f38b449", 00:06:29.480 "is_configured": true, 00:06:29.480 "data_offset": 2048, 00:06:29.480 "data_size": 63488 00:06:29.480 } 00:06:29.480 ] 00:06:29.480 } 00:06:29.480 } 00:06:29.480 }' 00:06:29.480 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:29.480 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:29.480 BaseBdev2' 00:06:29.480 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:29.480 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:29.480 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:06:29.480 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:29.480 14:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:29.480 14:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.480 14:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:29.480 14:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.480 14:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:29.480 14:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:29.480 14:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:29.481 14:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:29.481 14:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:29.481 14:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.481 14:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:29.481 14:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.481 14:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:29.481 14:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:29.481 14:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:29.481 14:31:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.481 14:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:29.481 [2024-10-01 14:31:21.054257] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:29.481 14:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.481 14:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:29.481 14:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:06:29.481 14:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:29.481 14:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:06:29.481 14:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:06:29.481 14:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:06:29.481 14:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:29.481 14:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:29.481 14:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:06:29.481 14:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:06:29.481 14:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:29.481 14:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:29.481 14:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:29.481 14:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:06:29.481 14:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:29.481 14:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:29.481 14:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:29.481 14:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.481 14:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:29.481 14:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.481 14:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:29.481 "name": "Existed_Raid", 00:06:29.481 "uuid": "055b614d-3d0f-4380-af8d-14d00f81f279", 00:06:29.481 "strip_size_kb": 0, 00:06:29.481 "state": "online", 00:06:29.481 "raid_level": "raid1", 00:06:29.481 "superblock": true, 00:06:29.481 "num_base_bdevs": 2, 00:06:29.481 "num_base_bdevs_discovered": 1, 00:06:29.481 "num_base_bdevs_operational": 1, 00:06:29.481 "base_bdevs_list": [ 00:06:29.481 { 00:06:29.481 "name": null, 00:06:29.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:29.481 "is_configured": false, 00:06:29.481 "data_offset": 0, 00:06:29.481 "data_size": 63488 00:06:29.481 }, 00:06:29.481 { 00:06:29.481 "name": "BaseBdev2", 00:06:29.481 "uuid": "1d8f9b60-e5c4-42c5-b8ff-9d876f38b449", 00:06:29.481 "is_configured": true, 00:06:29.481 "data_offset": 2048, 00:06:29.481 "data_size": 63488 00:06:29.481 } 00:06:29.481 ] 00:06:29.481 }' 00:06:29.481 14:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:29.481 14:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:30.051 14:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:06:30.052 14:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:30.052 14:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:30.052 14:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:30.052 14:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.052 14:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:30.052 14:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.052 14:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:30.052 14:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:30.052 14:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:30.052 14:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.052 14:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:30.052 [2024-10-01 14:31:21.489825] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:30.052 [2024-10-01 14:31:21.489961] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:30.052 [2024-10-01 14:31:21.555551] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:30.052 [2024-10-01 14:31:21.555622] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:30.052 [2024-10-01 14:31:21.555634] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:06:30.052 14:31:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.052 14:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:30.052 14:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:30.052 14:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:30.052 14:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:30.052 14:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.052 14:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:30.052 14:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.052 14:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:30.052 14:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:30.052 14:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:30.052 14:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61736 00:06:30.052 14:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 61736 ']' 00:06:30.052 14:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 61736 00:06:30.052 14:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:06:30.052 14:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:30.052 14:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61736 00:06:30.052 14:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:30.052 14:31:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:30.052 14:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61736' 00:06:30.052 killing process with pid 61736 00:06:30.052 14:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 61736 00:06:30.052 14:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 61736 00:06:30.052 [2024-10-01 14:31:21.622701] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:30.052 [2024-10-01 14:31:21.634022] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:30.990 14:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:06:30.990 00:06:30.990 real 0m4.035s 00:06:30.990 user 0m5.721s 00:06:30.990 sys 0m0.616s 00:06:30.990 14:31:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.990 ************************************ 00:06:30.990 END TEST raid_state_function_test_sb 00:06:30.990 ************************************ 00:06:30.990 14:31:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:30.990 14:31:22 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:06:30.990 14:31:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:30.990 14:31:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.990 14:31:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:30.990 ************************************ 00:06:30.990 START TEST raid_superblock_test 00:06:30.990 ************************************ 00:06:30.990 14:31:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:06:30.990 14:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local 
raid_level=raid1 00:06:30.990 14:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:06:30.990 14:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:06:30.990 14:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:06:30.990 14:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:06:30.990 14:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:06:30.990 14:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:06:30.990 14:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:06:30.990 14:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:06:30.990 14:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:06:30.990 14:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:06:30.990 14:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:06:30.990 14:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:06:30.990 14:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:06:30.990 14:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:06:30.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:30.990 14:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61977 00:06:30.990 14:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61977 00:06:30.990 14:31:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 61977 ']' 00:06:30.990 14:31:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.990 14:31:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:30.990 14:31:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.990 14:31:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:30.990 14:31:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.990 14:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:06:30.990 [2024-10-01 14:31:22.646049] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:06:30.990 [2024-10-01 14:31:22.646425] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61977 ] 00:06:31.249 [2024-10-01 14:31:22.796122] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.508 [2024-10-01 14:31:23.016917] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.508 [2024-10-01 14:31:23.164601] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:31.508 [2024-10-01 14:31:23.164675] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:32.077 14:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.077 14:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:06:32.077 14:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:06:32.077 14:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:32.077 14:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:06:32.077 14:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:06:32.077 14:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:06:32.077 14:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:32.077 14:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:06:32.077 14:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:32.077 14:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:06:32.077 
14:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.077 14:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.077 malloc1 00:06:32.077 14:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.077 14:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:06:32.077 14:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.077 14:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.077 [2024-10-01 14:31:23.546084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:32.077 [2024-10-01 14:31:23.546220] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:32.077 [2024-10-01 14:31:23.546249] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:32.077 [2024-10-01 14:31:23.546265] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:32.077 [2024-10-01 14:31:23.548645] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:32.077 [2024-10-01 14:31:23.548686] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:32.077 pt1 00:06:32.077 14:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.077 14:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:06:32.077 14:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:32.078 14:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:06:32.078 14:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:06:32.078 14:31:23 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:06:32.078 14:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:32.078 14:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:06:32.078 14:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:32.078 14:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:06:32.078 14:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.078 14:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.078 malloc2 00:06:32.078 14:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.078 14:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:32.078 14:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.078 14:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.078 [2024-10-01 14:31:23.610886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:32.078 [2024-10-01 14:31:23.610977] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:32.078 [2024-10-01 14:31:23.611007] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:06:32.078 [2024-10-01 14:31:23.611017] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:32.078 [2024-10-01 14:31:23.613409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:32.078 [2024-10-01 14:31:23.613801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:32.078 
pt2 00:06:32.078 14:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.078 14:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:06:32.078 14:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:32.078 14:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:06:32.078 14:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.078 14:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.078 [2024-10-01 14:31:23.618922] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:32.078 [2024-10-01 14:31:23.620983] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:32.078 [2024-10-01 14:31:23.621155] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:32.078 [2024-10-01 14:31:23.621171] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:06:32.078 [2024-10-01 14:31:23.621470] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:32.078 [2024-10-01 14:31:23.621654] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:32.078 [2024-10-01 14:31:23.621666] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:06:32.078 [2024-10-01 14:31:23.621855] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:32.078 14:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.078 14:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:06:32.078 14:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:06:32.078 14:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:32.078 14:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:06:32.078 14:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:06:32.078 14:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:32.078 14:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:32.078 14:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:32.078 14:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:32.078 14:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:32.078 14:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:32.078 14:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:32.078 14:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.078 14:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.078 14:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.078 14:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:32.078 "name": "raid_bdev1", 00:06:32.078 "uuid": "17224fb8-778d-457d-95d2-a741af9db9cb", 00:06:32.078 "strip_size_kb": 0, 00:06:32.078 "state": "online", 00:06:32.078 "raid_level": "raid1", 00:06:32.078 "superblock": true, 00:06:32.078 "num_base_bdevs": 2, 00:06:32.078 "num_base_bdevs_discovered": 2, 00:06:32.078 "num_base_bdevs_operational": 2, 00:06:32.078 "base_bdevs_list": [ 00:06:32.078 { 00:06:32.078 "name": "pt1", 00:06:32.078 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:06:32.078 "is_configured": true, 00:06:32.078 "data_offset": 2048, 00:06:32.078 "data_size": 63488 00:06:32.078 }, 00:06:32.078 { 00:06:32.078 "name": "pt2", 00:06:32.078 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:32.078 "is_configured": true, 00:06:32.078 "data_offset": 2048, 00:06:32.078 "data_size": 63488 00:06:32.078 } 00:06:32.078 ] 00:06:32.078 }' 00:06:32.078 14:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:32.078 14:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.338 14:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:06:32.338 14:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:06:32.338 14:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:32.338 14:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:32.338 14:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:32.338 14:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:32.338 14:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:32.338 14:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:32.338 14:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.338 14:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.338 [2024-10-01 14:31:23.939268] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:32.338 14:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.338 14:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:06:32.338 "name": "raid_bdev1", 00:06:32.338 "aliases": [ 00:06:32.338 "17224fb8-778d-457d-95d2-a741af9db9cb" 00:06:32.338 ], 00:06:32.338 "product_name": "Raid Volume", 00:06:32.338 "block_size": 512, 00:06:32.338 "num_blocks": 63488, 00:06:32.338 "uuid": "17224fb8-778d-457d-95d2-a741af9db9cb", 00:06:32.338 "assigned_rate_limits": { 00:06:32.338 "rw_ios_per_sec": 0, 00:06:32.338 "rw_mbytes_per_sec": 0, 00:06:32.338 "r_mbytes_per_sec": 0, 00:06:32.338 "w_mbytes_per_sec": 0 00:06:32.338 }, 00:06:32.338 "claimed": false, 00:06:32.338 "zoned": false, 00:06:32.338 "supported_io_types": { 00:06:32.338 "read": true, 00:06:32.338 "write": true, 00:06:32.338 "unmap": false, 00:06:32.338 "flush": false, 00:06:32.338 "reset": true, 00:06:32.338 "nvme_admin": false, 00:06:32.338 "nvme_io": false, 00:06:32.338 "nvme_io_md": false, 00:06:32.338 "write_zeroes": true, 00:06:32.338 "zcopy": false, 00:06:32.338 "get_zone_info": false, 00:06:32.339 "zone_management": false, 00:06:32.339 "zone_append": false, 00:06:32.339 "compare": false, 00:06:32.339 "compare_and_write": false, 00:06:32.339 "abort": false, 00:06:32.339 "seek_hole": false, 00:06:32.339 "seek_data": false, 00:06:32.339 "copy": false, 00:06:32.339 "nvme_iov_md": false 00:06:32.339 }, 00:06:32.339 "memory_domains": [ 00:06:32.339 { 00:06:32.339 "dma_device_id": "system", 00:06:32.339 "dma_device_type": 1 00:06:32.339 }, 00:06:32.339 { 00:06:32.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:32.339 "dma_device_type": 2 00:06:32.339 }, 00:06:32.339 { 00:06:32.339 "dma_device_id": "system", 00:06:32.339 "dma_device_type": 1 00:06:32.339 }, 00:06:32.339 { 00:06:32.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:32.339 "dma_device_type": 2 00:06:32.339 } 00:06:32.339 ], 00:06:32.339 "driver_specific": { 00:06:32.339 "raid": { 00:06:32.339 "uuid": "17224fb8-778d-457d-95d2-a741af9db9cb", 00:06:32.339 "strip_size_kb": 0, 00:06:32.339 "state": "online", 00:06:32.339 "raid_level": "raid1", 
00:06:32.339 "superblock": true, 00:06:32.339 "num_base_bdevs": 2, 00:06:32.339 "num_base_bdevs_discovered": 2, 00:06:32.339 "num_base_bdevs_operational": 2, 00:06:32.339 "base_bdevs_list": [ 00:06:32.339 { 00:06:32.339 "name": "pt1", 00:06:32.339 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:32.339 "is_configured": true, 00:06:32.339 "data_offset": 2048, 00:06:32.339 "data_size": 63488 00:06:32.339 }, 00:06:32.339 { 00:06:32.339 "name": "pt2", 00:06:32.339 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:32.339 "is_configured": true, 00:06:32.339 "data_offset": 2048, 00:06:32.339 "data_size": 63488 00:06:32.339 } 00:06:32.339 ] 00:06:32.339 } 00:06:32.339 } 00:06:32.339 }' 00:06:32.339 14:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:32.339 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:06:32.339 pt2' 00:06:32.339 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.599 [2024-10-01 14:31:24.127291] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=17224fb8-778d-457d-95d2-a741af9db9cb 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 17224fb8-778d-457d-95d2-a741af9db9cb ']' 00:06:32.599 14:31:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.599 [2024-10-01 14:31:24.167018] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:32.599 [2024-10-01 14:31:24.167183] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:32.599 [2024-10-01 14:31:24.167329] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:32.599 [2024-10-01 14:31:24.167450] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:32.599 [2024-10-01 14:31:24.167573] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:32.599 14:31:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.599 [2024-10-01 14:31:24.263035] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:32.599 [2024-10-01 14:31:24.265166] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:32.599 [2024-10-01 14:31:24.265256] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:06:32.599 [2024-10-01 14:31:24.265316] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:06:32.599 [2024-10-01 14:31:24.265331] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:32.599 [2024-10-01 14:31:24.265343] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:06:32.599 request: 00:06:32.599 { 00:06:32.599 "name": "raid_bdev1", 00:06:32.599 "raid_level": "raid1", 00:06:32.599 "base_bdevs": [ 00:06:32.599 "malloc1", 00:06:32.599 "malloc2" 00:06:32.599 ], 00:06:32.599 "superblock": false, 00:06:32.599 "method": "bdev_raid_create", 00:06:32.599 "req_id": 1 00:06:32.599 } 00:06:32.599 Got 
JSON-RPC error response 00:06:32.599 response: 00:06:32.599 { 00:06:32.599 "code": -17, 00:06:32.599 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:06:32.599 } 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:06:32.599 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.859 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.859 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:06:32.859 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:06:32.859 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:06:32.859 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.859 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.859 [2024-10-01 14:31:24.307022] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:32.859 [2024-10-01 14:31:24.307241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:06:32.859 [2024-10-01 14:31:24.307266] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:06:32.859 [2024-10-01 14:31:24.307278] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:32.859 [2024-10-01 14:31:24.309731] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:32.859 [2024-10-01 14:31:24.309773] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:32.859 [2024-10-01 14:31:24.309874] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:06:32.859 [2024-10-01 14:31:24.309935] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:32.859 pt1 00:06:32.859 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.859 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:06:32.859 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:32.859 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:32.859 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:06:32.859 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:06:32.859 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:32.859 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:32.859 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:32.859 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:32.859 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:32.859 
14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:32.859 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.859 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.859 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:32.859 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.859 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:32.859 "name": "raid_bdev1", 00:06:32.859 "uuid": "17224fb8-778d-457d-95d2-a741af9db9cb", 00:06:32.859 "strip_size_kb": 0, 00:06:32.860 "state": "configuring", 00:06:32.860 "raid_level": "raid1", 00:06:32.860 "superblock": true, 00:06:32.860 "num_base_bdevs": 2, 00:06:32.860 "num_base_bdevs_discovered": 1, 00:06:32.860 "num_base_bdevs_operational": 2, 00:06:32.860 "base_bdevs_list": [ 00:06:32.860 { 00:06:32.860 "name": "pt1", 00:06:32.860 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:32.860 "is_configured": true, 00:06:32.860 "data_offset": 2048, 00:06:32.860 "data_size": 63488 00:06:32.860 }, 00:06:32.860 { 00:06:32.860 "name": null, 00:06:32.860 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:32.860 "is_configured": false, 00:06:32.860 "data_offset": 2048, 00:06:32.860 "data_size": 63488 00:06:32.860 } 00:06:32.860 ] 00:06:32.860 }' 00:06:32.860 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:32.860 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.120 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:06:33.120 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:06:33.120 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs 
)) 00:06:33.120 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:33.120 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.120 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.120 [2024-10-01 14:31:24.603072] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:33.120 [2024-10-01 14:31:24.603156] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:33.120 [2024-10-01 14:31:24.603178] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:06:33.120 [2024-10-01 14:31:24.603190] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:33.120 [2024-10-01 14:31:24.603723] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:33.120 [2024-10-01 14:31:24.603748] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:33.120 [2024-10-01 14:31:24.603834] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:06:33.120 [2024-10-01 14:31:24.603858] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:33.120 [2024-10-01 14:31:24.603976] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:33.120 [2024-10-01 14:31:24.603989] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:06:33.120 [2024-10-01 14:31:24.604239] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:06:33.120 [2024-10-01 14:31:24.604389] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:33.120 [2024-10-01 14:31:24.604398] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007e80 00:06:33.120 [2024-10-01 14:31:24.604534] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:33.120 pt2 00:06:33.120 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.120 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:06:33.120 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:06:33.120 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:06:33.120 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:33.120 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:33.120 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:06:33.120 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:06:33.120 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:33.120 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:33.120 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:33.120 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:33.120 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:33.120 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:33.120 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:33.120 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.120 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:06:33.120 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.120 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:33.120 "name": "raid_bdev1", 00:06:33.120 "uuid": "17224fb8-778d-457d-95d2-a741af9db9cb", 00:06:33.120 "strip_size_kb": 0, 00:06:33.120 "state": "online", 00:06:33.120 "raid_level": "raid1", 00:06:33.120 "superblock": true, 00:06:33.120 "num_base_bdevs": 2, 00:06:33.120 "num_base_bdevs_discovered": 2, 00:06:33.120 "num_base_bdevs_operational": 2, 00:06:33.120 "base_bdevs_list": [ 00:06:33.120 { 00:06:33.120 "name": "pt1", 00:06:33.120 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:33.120 "is_configured": true, 00:06:33.120 "data_offset": 2048, 00:06:33.120 "data_size": 63488 00:06:33.120 }, 00:06:33.120 { 00:06:33.120 "name": "pt2", 00:06:33.120 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:33.120 "is_configured": true, 00:06:33.120 "data_offset": 2048, 00:06:33.120 "data_size": 63488 00:06:33.120 } 00:06:33.120 ] 00:06:33.120 }' 00:06:33.120 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:33.120 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.453 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:06:33.453 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:06:33.453 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:33.453 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:33.453 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:33.453 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:33.453 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 
-- # jq '.[]' 00:06:33.453 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:33.453 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.453 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.453 [2024-10-01 14:31:24.931455] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:33.453 14:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.453 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:33.453 "name": "raid_bdev1", 00:06:33.453 "aliases": [ 00:06:33.453 "17224fb8-778d-457d-95d2-a741af9db9cb" 00:06:33.453 ], 00:06:33.453 "product_name": "Raid Volume", 00:06:33.453 "block_size": 512, 00:06:33.453 "num_blocks": 63488, 00:06:33.453 "uuid": "17224fb8-778d-457d-95d2-a741af9db9cb", 00:06:33.453 "assigned_rate_limits": { 00:06:33.453 "rw_ios_per_sec": 0, 00:06:33.453 "rw_mbytes_per_sec": 0, 00:06:33.453 "r_mbytes_per_sec": 0, 00:06:33.453 "w_mbytes_per_sec": 0 00:06:33.453 }, 00:06:33.453 "claimed": false, 00:06:33.453 "zoned": false, 00:06:33.453 "supported_io_types": { 00:06:33.454 "read": true, 00:06:33.454 "write": true, 00:06:33.454 "unmap": false, 00:06:33.454 "flush": false, 00:06:33.454 "reset": true, 00:06:33.454 "nvme_admin": false, 00:06:33.454 "nvme_io": false, 00:06:33.454 "nvme_io_md": false, 00:06:33.454 "write_zeroes": true, 00:06:33.454 "zcopy": false, 00:06:33.454 "get_zone_info": false, 00:06:33.454 "zone_management": false, 00:06:33.454 "zone_append": false, 00:06:33.454 "compare": false, 00:06:33.454 "compare_and_write": false, 00:06:33.454 "abort": false, 00:06:33.454 "seek_hole": false, 00:06:33.454 "seek_data": false, 00:06:33.454 "copy": false, 00:06:33.454 "nvme_iov_md": false 00:06:33.454 }, 00:06:33.454 "memory_domains": [ 00:06:33.454 { 00:06:33.454 "dma_device_id": 
"system", 00:06:33.454 "dma_device_type": 1 00:06:33.454 }, 00:06:33.454 { 00:06:33.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:33.454 "dma_device_type": 2 00:06:33.454 }, 00:06:33.454 { 00:06:33.454 "dma_device_id": "system", 00:06:33.454 "dma_device_type": 1 00:06:33.454 }, 00:06:33.454 { 00:06:33.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:33.454 "dma_device_type": 2 00:06:33.454 } 00:06:33.454 ], 00:06:33.454 "driver_specific": { 00:06:33.454 "raid": { 00:06:33.454 "uuid": "17224fb8-778d-457d-95d2-a741af9db9cb", 00:06:33.454 "strip_size_kb": 0, 00:06:33.454 "state": "online", 00:06:33.454 "raid_level": "raid1", 00:06:33.454 "superblock": true, 00:06:33.454 "num_base_bdevs": 2, 00:06:33.454 "num_base_bdevs_discovered": 2, 00:06:33.454 "num_base_bdevs_operational": 2, 00:06:33.454 "base_bdevs_list": [ 00:06:33.454 { 00:06:33.454 "name": "pt1", 00:06:33.454 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:33.454 "is_configured": true, 00:06:33.454 "data_offset": 2048, 00:06:33.454 "data_size": 63488 00:06:33.454 }, 00:06:33.454 { 00:06:33.454 "name": "pt2", 00:06:33.454 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:33.454 "is_configured": true, 00:06:33.454 "data_offset": 2048, 00:06:33.454 "data_size": 63488 00:06:33.454 } 00:06:33.454 ] 00:06:33.454 } 00:06:33.454 } 00:06:33.454 }' 00:06:33.454 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:33.454 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:06:33.454 pt2' 00:06:33.454 14:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:33.454 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:33.454 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:06:33.454 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:06:33.454 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:33.454 14:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.454 14:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.454 14:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.454 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:33.454 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:33.454 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:33.454 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:06:33.454 14:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.454 14:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.454 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:33.454 14:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.454 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:33.454 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:33.454 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:06:33.454 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:33.454 14:31:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.454 14:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.454 [2024-10-01 14:31:25.103482] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:33.454 14:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.714 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 17224fb8-778d-457d-95d2-a741af9db9cb '!=' 17224fb8-778d-457d-95d2-a741af9db9cb ']' 00:06:33.714 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:06:33.714 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:33.714 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:06:33.714 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:06:33.714 14:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.714 14:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.714 [2024-10-01 14:31:25.143292] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:06:33.715 14:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.715 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:06:33.715 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:33.715 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:33.715 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:06:33.715 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:06:33.715 14:31:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:33.715 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:33.715 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:33.715 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:33.715 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:33.715 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:33.715 14:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.715 14:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.715 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:33.715 14:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.715 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:33.715 "name": "raid_bdev1", 00:06:33.715 "uuid": "17224fb8-778d-457d-95d2-a741af9db9cb", 00:06:33.715 "strip_size_kb": 0, 00:06:33.715 "state": "online", 00:06:33.715 "raid_level": "raid1", 00:06:33.715 "superblock": true, 00:06:33.715 "num_base_bdevs": 2, 00:06:33.715 "num_base_bdevs_discovered": 1, 00:06:33.715 "num_base_bdevs_operational": 1, 00:06:33.715 "base_bdevs_list": [ 00:06:33.715 { 00:06:33.715 "name": null, 00:06:33.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:33.715 "is_configured": false, 00:06:33.715 "data_offset": 0, 00:06:33.715 "data_size": 63488 00:06:33.715 }, 00:06:33.715 { 00:06:33.715 "name": "pt2", 00:06:33.715 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:33.715 "is_configured": true, 00:06:33.715 "data_offset": 2048, 00:06:33.715 "data_size": 63488 00:06:33.715 } 00:06:33.715 ] 00:06:33.715 }' 
00:06:33.715 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:33.715 14:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.975 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:33.975 14:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.975 14:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.975 [2024-10-01 14:31:25.475280] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:33.975 [2024-10-01 14:31:25.475319] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:33.975 [2024-10-01 14:31:25.475406] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:33.975 [2024-10-01 14:31:25.475459] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:33.975 [2024-10-01 14:31:25.475471] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:06:33.975 14:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.975 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:06:33.975 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:33.975 14:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.975 14:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.975 14:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.975 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:06:33.975 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' 
']' 00:06:33.975 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:06:33.975 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:06:33.975 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:06:33.975 14:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.975 14:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.975 14:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.975 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:06:33.975 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:06:33.975 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:06:33.975 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:06:33.975 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:06:33.975 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:33.975 14:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.975 14:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.975 [2024-10-01 14:31:25.527305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:33.975 [2024-10-01 14:31:25.527379] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:33.975 [2024-10-01 14:31:25.527397] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:06:33.975 [2024-10-01 14:31:25.527408] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:33.975 
[2024-10-01 14:31:25.529903] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:33.975 [2024-10-01 14:31:25.530066] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:33.975 [2024-10-01 14:31:25.530184] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:06:33.975 [2024-10-01 14:31:25.530238] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:33.975 [2024-10-01 14:31:25.530348] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:06:33.975 [2024-10-01 14:31:25.530363] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:06:33.975 [2024-10-01 14:31:25.530631] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:33.975 [2024-10-01 14:31:25.530793] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:06:33.975 [2024-10-01 14:31:25.530802] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:06:33.975 [2024-10-01 14:31:25.530944] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:33.975 pt2 00:06:33.975 14:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.975 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:06:33.975 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:33.975 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:33.975 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:06:33.975 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:06:33.975 14:31:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:33.975 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:33.975 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:33.975 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:33.975 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:33.975 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:33.975 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:33.975 14:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.975 14:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.975 14:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.975 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:33.975 "name": "raid_bdev1", 00:06:33.975 "uuid": "17224fb8-778d-457d-95d2-a741af9db9cb", 00:06:33.975 "strip_size_kb": 0, 00:06:33.976 "state": "online", 00:06:33.976 "raid_level": "raid1", 00:06:33.976 "superblock": true, 00:06:33.976 "num_base_bdevs": 2, 00:06:33.976 "num_base_bdevs_discovered": 1, 00:06:33.976 "num_base_bdevs_operational": 1, 00:06:33.976 "base_bdevs_list": [ 00:06:33.976 { 00:06:33.976 "name": null, 00:06:33.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:33.976 "is_configured": false, 00:06:33.976 "data_offset": 2048, 00:06:33.976 "data_size": 63488 00:06:33.976 }, 00:06:33.976 { 00:06:33.976 "name": "pt2", 00:06:33.976 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:33.976 "is_configured": true, 00:06:33.976 "data_offset": 2048, 00:06:33.976 "data_size": 63488 00:06:33.976 } 00:06:33.976 ] 00:06:33.976 }' 
00:06:33.976 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:33.976 14:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.237 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:34.237 14:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.237 14:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.237 [2024-10-01 14:31:25.855341] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:34.237 [2024-10-01 14:31:25.855508] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:34.237 [2024-10-01 14:31:25.855653] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:34.237 [2024-10-01 14:31:25.855871] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:34.237 [2024-10-01 14:31:25.855959] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:06:34.237 14:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.237 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:34.237 14:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.237 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:06:34.237 14:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.237 14:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.237 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:06:34.237 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:06:34.237 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:06:34.237 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:06:34.237 14:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.237 14:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.237 [2024-10-01 14:31:25.895383] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:34.237 [2024-10-01 14:31:25.895459] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:34.237 [2024-10-01 14:31:25.895481] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:06:34.237 [2024-10-01 14:31:25.895491] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:34.237 [2024-10-01 14:31:25.897984] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:34.237 [2024-10-01 14:31:25.898041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:34.237 [2024-10-01 14:31:25.898145] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:06:34.237 [2024-10-01 14:31:25.898191] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:34.237 [2024-10-01 14:31:25.898323] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:06:34.237 [2024-10-01 14:31:25.898334] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:34.237 [2024-10-01 14:31:25.898356] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:06:34.237 [2024-10-01 14:31:25.898406] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:06:34.237 [2024-10-01 14:31:25.898481] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:06:34.237 [2024-10-01 14:31:25.898490] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:06:34.237 [2024-10-01 14:31:25.898769] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:06:34.237 [2024-10-01 14:31:25.898903] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:06:34.237 [2024-10-01 14:31:25.898918] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:06:34.237 [2024-10-01 14:31:25.899060] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:34.237 pt1 00:06:34.237 14:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.237 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:06:34.237 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:06:34.237 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:34.237 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:34.237 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:06:34.237 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:06:34.237 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:34.237 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:34.237 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:34.237 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:06:34.237 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:34.237 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:34.237 14:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.237 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:34.237 14:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.237 14:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.498 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:34.498 "name": "raid_bdev1", 00:06:34.498 "uuid": "17224fb8-778d-457d-95d2-a741af9db9cb", 00:06:34.498 "strip_size_kb": 0, 00:06:34.498 "state": "online", 00:06:34.498 "raid_level": "raid1", 00:06:34.498 "superblock": true, 00:06:34.498 "num_base_bdevs": 2, 00:06:34.498 "num_base_bdevs_discovered": 1, 00:06:34.498 "num_base_bdevs_operational": 1, 00:06:34.498 "base_bdevs_list": [ 00:06:34.498 { 00:06:34.498 "name": null, 00:06:34.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:34.498 "is_configured": false, 00:06:34.498 "data_offset": 2048, 00:06:34.498 "data_size": 63488 00:06:34.498 }, 00:06:34.498 { 00:06:34.498 "name": "pt2", 00:06:34.498 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:34.498 "is_configured": true, 00:06:34.498 "data_offset": 2048, 00:06:34.498 "data_size": 63488 00:06:34.498 } 00:06:34.498 ] 00:06:34.498 }' 00:06:34.498 14:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:34.498 14:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.760 14:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:06:34.760 14:31:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:34.760 14:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.760 14:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.760 14:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.760 14:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:06:34.760 14:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:34.760 14:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:06:34.760 14:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.760 14:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.760 [2024-10-01 14:31:26.263739] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:34.760 14:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.760 14:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 17224fb8-778d-457d-95d2-a741af9db9cb '!=' 17224fb8-778d-457d-95d2-a741af9db9cb ']' 00:06:34.760 14:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61977 00:06:34.760 14:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 61977 ']' 00:06:34.760 14:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 61977 00:06:34.760 14:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:06:34.760 14:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:34.760 14:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61977 00:06:34.760 killing 
process with pid 61977 00:06:34.760 14:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:34.760 14:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:34.760 14:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61977' 00:06:34.760 14:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 61977 00:06:34.760 [2024-10-01 14:31:26.316998] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:34.760 14:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 61977 00:06:34.760 [2024-10-01 14:31:26.317113] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:34.760 [2024-10-01 14:31:26.317169] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:34.760 [2024-10-01 14:31:26.317188] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:06:35.020 [2024-10-01 14:31:26.454786] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:35.962 ************************************ 00:06:35.962 END TEST raid_superblock_test 00:06:35.962 ************************************ 00:06:35.962 14:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:06:35.962 00:06:35.962 real 0m4.773s 00:06:35.962 user 0m7.035s 00:06:35.962 sys 0m0.801s 00:06:35.962 14:31:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.962 14:31:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.962 14:31:27 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:06:35.962 14:31:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:35.962 14:31:27 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.962 14:31:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:35.962 ************************************ 00:06:35.962 START TEST raid_read_error_test 00:06:35.962 ************************************ 00:06:35.962 14:31:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:06:35.962 14:31:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:06:35.962 14:31:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:06:35.962 14:31:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:06:35.962 14:31:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:06:35.962 14:31:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:35.962 14:31:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:06:35.962 14:31:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:35.962 14:31:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:35.962 14:31:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:06:35.962 14:31:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:35.962 14:31:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:35.962 14:31:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:35.962 14:31:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:06:35.962 14:31:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:06:35.962 14:31:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:06:35.962 14:31:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:06:35.962 14:31:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:06:35.962 14:31:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:06:35.962 14:31:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:06:35.962 14:31:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:06:35.962 14:31:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:06:35.962 14:31:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.z4Oyg5UhDD 00:06:35.962 14:31:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62291 00:06:35.962 14:31:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62291 00:06:35.962 14:31:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 62291 ']' 00:06:35.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.962 14:31:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.962 14:31:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:35.962 14:31:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:35.962 14:31:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:35.962 14:31:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.962 14:31:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:06:35.962 [2024-10-01 14:31:27.490926] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:06:35.962 [2024-10-01 14:31:27.491250] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62291 ] 00:06:35.962 [2024-10-01 14:31:27.637952] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.223 [2024-10-01 14:31:27.836953] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.483 [2024-10-01 14:31:27.974475] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:36.483 [2024-10-01 14:31:27.974519] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:36.745 14:31:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:36.745 14:31:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:06:36.745 14:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:36.745 14:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:06:36.745 14:31:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.745 14:31:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.745 BaseBdev1_malloc 00:06:36.745 14:31:28 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.745 14:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:06:36.745 14:31:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.745 14:31:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.745 true 00:06:36.745 14:31:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.745 14:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:06:36.745 14:31:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.745 14:31:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.745 [2024-10-01 14:31:28.384570] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:06:36.745 [2024-10-01 14:31:28.385264] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:36.745 [2024-10-01 14:31:28.385291] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:06:36.745 [2024-10-01 14:31:28.385303] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:36.745 [2024-10-01 14:31:28.387493] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:36.745 [2024-10-01 14:31:28.387530] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:06:36.745 BaseBdev1 00:06:36.745 14:31:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.745 14:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:36.745 14:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:06:36.745 14:31:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.745 14:31:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.007 BaseBdev2_malloc 00:06:37.007 14:31:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.007 14:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:06:37.007 14:31:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.007 14:31:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.007 true 00:06:37.007 14:31:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.007 14:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:06:37.007 14:31:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.007 14:31:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.007 [2024-10-01 14:31:28.445798] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:06:37.007 [2024-10-01 14:31:28.445854] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:37.007 [2024-10-01 14:31:28.445871] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:37.007 [2024-10-01 14:31:28.445881] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:37.007 [2024-10-01 14:31:28.447982] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:37.007 [2024-10-01 14:31:28.448120] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:06:37.007 BaseBdev2 00:06:37.007 14:31:28 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.008 14:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:06:37.008 14:31:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.008 14:31:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.008 [2024-10-01 14:31:28.453856] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:37.008 [2024-10-01 14:31:28.455723] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:37.008 [2024-10-01 14:31:28.455916] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:37.008 [2024-10-01 14:31:28.455929] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:06:37.008 [2024-10-01 14:31:28.456177] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:37.008 [2024-10-01 14:31:28.456326] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:37.008 [2024-10-01 14:31:28.456335] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:06:37.008 [2024-10-01 14:31:28.456480] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:37.008 14:31:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.008 14:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:06:37.008 14:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:37.008 14:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:37.008 14:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:06:37.008 14:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:06:37.008 14:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:37.008 14:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:37.008 14:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:37.008 14:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:37.008 14:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:37.008 14:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:37.008 14:31:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.008 14:31:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.008 14:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:37.008 14:31:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.008 14:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:37.008 "name": "raid_bdev1", 00:06:37.008 "uuid": "52c17db2-87ee-4cd0-9b1b-64e5625dfa60", 00:06:37.008 "strip_size_kb": 0, 00:06:37.008 "state": "online", 00:06:37.008 "raid_level": "raid1", 00:06:37.008 "superblock": true, 00:06:37.008 "num_base_bdevs": 2, 00:06:37.008 "num_base_bdevs_discovered": 2, 00:06:37.008 "num_base_bdevs_operational": 2, 00:06:37.008 "base_bdevs_list": [ 00:06:37.008 { 00:06:37.008 "name": "BaseBdev1", 00:06:37.008 "uuid": "3230eb31-3a0d-5061-88c0-6cf7d86b63f2", 00:06:37.008 "is_configured": true, 00:06:37.008 "data_offset": 2048, 00:06:37.008 "data_size": 63488 00:06:37.008 }, 00:06:37.008 { 00:06:37.008 "name": "BaseBdev2", 00:06:37.008 "uuid": 
"03762bde-24cc-539f-86eb-bcfe2ab86809", 00:06:37.008 "is_configured": true, 00:06:37.008 "data_offset": 2048, 00:06:37.008 "data_size": 63488 00:06:37.008 } 00:06:37.008 ] 00:06:37.008 }' 00:06:37.008 14:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:37.008 14:31:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.269 14:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:06:37.269 14:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:37.269 [2024-10-01 14:31:28.866900] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:38.209 14:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:06:38.209 14:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.209 14:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.209 14:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.209 14:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:06:38.209 14:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:06:38.209 14:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:06:38.209 14:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:06:38.209 14:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:06:38.209 14:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:38.209 14:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:06:38.209 14:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:06:38.209 14:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:06:38.209 14:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:38.209 14:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:38.209 14:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:38.209 14:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:38.210 14:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:38.210 14:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:38.210 14:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:38.210 14:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.210 14:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.210 14:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.210 14:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:38.210 "name": "raid_bdev1", 00:06:38.210 "uuid": "52c17db2-87ee-4cd0-9b1b-64e5625dfa60", 00:06:38.210 "strip_size_kb": 0, 00:06:38.210 "state": "online", 00:06:38.210 "raid_level": "raid1", 00:06:38.210 "superblock": true, 00:06:38.210 "num_base_bdevs": 2, 00:06:38.210 "num_base_bdevs_discovered": 2, 00:06:38.210 "num_base_bdevs_operational": 2, 00:06:38.210 "base_bdevs_list": [ 00:06:38.210 { 00:06:38.210 "name": "BaseBdev1", 00:06:38.210 "uuid": "3230eb31-3a0d-5061-88c0-6cf7d86b63f2", 00:06:38.210 "is_configured": true, 00:06:38.210 "data_offset": 2048, 00:06:38.210 
"data_size": 63488 00:06:38.210 }, 00:06:38.210 { 00:06:38.210 "name": "BaseBdev2", 00:06:38.210 "uuid": "03762bde-24cc-539f-86eb-bcfe2ab86809", 00:06:38.210 "is_configured": true, 00:06:38.210 "data_offset": 2048, 00:06:38.210 "data_size": 63488 00:06:38.210 } 00:06:38.210 ] 00:06:38.210 }' 00:06:38.210 14:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:38.210 14:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.469 14:31:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:38.469 14:31:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.469 14:31:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.469 [2024-10-01 14:31:30.134931] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:38.469 [2024-10-01 14:31:30.134975] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:38.469 [2024-10-01 14:31:30.138030] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:38.469 [2024-10-01 14:31:30.138078] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:38.469 [2024-10-01 14:31:30.138160] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:38.469 [2024-10-01 14:31:30.138173] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:06:38.469 { 00:06:38.469 "results": [ 00:06:38.469 { 00:06:38.469 "job": "raid_bdev1", 00:06:38.469 "core_mask": "0x1", 00:06:38.469 "workload": "randrw", 00:06:38.469 "percentage": 50, 00:06:38.469 "status": "finished", 00:06:38.469 "queue_depth": 1, 00:06:38.469 "io_size": 131072, 00:06:38.469 "runtime": 1.266217, 00:06:38.469 "iops": 17553.073446336606, 00:06:38.469 "mibps": 2194.1341807920758, 
00:06:38.469 "io_failed": 0, 00:06:38.469 "io_timeout": 0, 00:06:38.469 "avg_latency_us": 53.79759754687857, 00:06:38.469 "min_latency_us": 29.144615384615385, 00:06:38.469 "max_latency_us": 1676.2092307692308 00:06:38.469 } 00:06:38.469 ], 00:06:38.469 "core_count": 1 00:06:38.469 } 00:06:38.469 14:31:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.469 14:31:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62291 00:06:38.469 14:31:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 62291 ']' 00:06:38.469 14:31:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 62291 00:06:38.469 14:31:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:06:38.469 14:31:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:38.469 14:31:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62291 00:06:38.729 killing process with pid 62291 00:06:38.729 14:31:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:38.729 14:31:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:38.729 14:31:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62291' 00:06:38.729 14:31:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 62291 00:06:38.729 [2024-10-01 14:31:30.168433] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:38.729 14:31:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 62291 00:06:38.729 [2024-10-01 14:31:30.254281] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:39.672 14:31:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.z4Oyg5UhDD 00:06:39.672 14:31:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:06:39.672 14:31:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:06:39.672 14:31:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:06:39.672 14:31:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:06:39.672 14:31:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:39.672 14:31:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:06:39.672 14:31:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:06:39.672 00:06:39.672 real 0m3.708s 00:06:39.672 user 0m4.397s 00:06:39.672 sys 0m0.405s 00:06:39.672 ************************************ 00:06:39.672 END TEST raid_read_error_test 00:06:39.672 ************************************ 00:06:39.672 14:31:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.672 14:31:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.672 14:31:31 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:06:39.672 14:31:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:39.673 14:31:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.673 14:31:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:39.673 ************************************ 00:06:39.673 START TEST raid_write_error_test 00:06:39.673 ************************************ 00:06:39.673 14:31:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:06:39.673 14:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:06:39.673 14:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:06:39.673 14:31:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:06:39.673 14:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:06:39.673 14:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:39.673 14:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:06:39.673 14:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:39.673 14:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:39.673 14:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:06:39.673 14:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:39.673 14:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:39.673 14:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:39.673 14:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:06:39.673 14:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:06:39.673 14:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:06:39.673 14:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:06:39.673 14:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:06:39.673 14:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:06:39.673 14:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:06:39.673 14:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:06:39.673 14:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:06:39.673 14:31:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.2H0J0b5P4W 00:06:39.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.673 14:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62431 00:06:39.673 14:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62431 00:06:39.673 14:31:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 62431 ']' 00:06:39.673 14:31:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.673 14:31:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:39.673 14:31:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.673 14:31:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:39.673 14:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:06:39.673 14:31:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.673 [2024-10-01 14:31:31.269789] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:06:39.673 [2024-10-01 14:31:31.269918] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62431 ] 00:06:39.933 [2024-10-01 14:31:31.422257] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.933 [2024-10-01 14:31:31.614046] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.194 [2024-10-01 14:31:31.752656] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:40.194 [2024-10-01 14:31:31.752721] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:40.455 14:31:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:40.455 14:31:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:06:40.455 14:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:40.455 14:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:06:40.455 14:31:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.455 14:31:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.721 BaseBdev1_malloc 00:06:40.721 14:31:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.721 14:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:06:40.721 14:31:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.721 14:31:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.721 true 00:06:40.721 14:31:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:06:40.721 14:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:06:40.721 14:31:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.721 14:31:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.721 [2024-10-01 14:31:32.172042] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:06:40.721 [2024-10-01 14:31:32.172103] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:40.721 [2024-10-01 14:31:32.172121] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:06:40.721 [2024-10-01 14:31:32.172133] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:40.721 [2024-10-01 14:31:32.174382] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:40.721 [2024-10-01 14:31:32.174427] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:06:40.721 BaseBdev1 00:06:40.721 14:31:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.721 14:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:40.721 14:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:06:40.721 14:31:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.721 14:31:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.721 BaseBdev2_malloc 00:06:40.721 14:31:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.721 14:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:06:40.721 14:31:32 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.721 14:31:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.721 true 00:06:40.721 14:31:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.721 14:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:06:40.721 14:31:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.721 14:31:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.722 [2024-10-01 14:31:32.234122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:06:40.722 [2024-10-01 14:31:32.234185] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:40.722 [2024-10-01 14:31:32.234202] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:40.722 [2024-10-01 14:31:32.234213] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:40.722 [2024-10-01 14:31:32.236390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:40.722 [2024-10-01 14:31:32.236431] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:06:40.722 BaseBdev2 00:06:40.722 14:31:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.722 14:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:06:40.722 14:31:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.722 14:31:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.722 [2024-10-01 14:31:32.242208] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:06:40.722 [2024-10-01 14:31:32.244076] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:40.722 [2024-10-01 14:31:32.244277] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:40.722 [2024-10-01 14:31:32.244292] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:06:40.722 [2024-10-01 14:31:32.244556] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:40.722 [2024-10-01 14:31:32.244735] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:40.722 [2024-10-01 14:31:32.244748] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:06:40.722 [2024-10-01 14:31:32.244904] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:40.722 14:31:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.722 14:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:06:40.722 14:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:40.722 14:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:40.722 14:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:06:40.722 14:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:06:40.722 14:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:40.722 14:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:40.722 14:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:40.722 14:31:32 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:40.722 14:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:40.722 14:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:40.722 14:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:40.722 14:31:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.722 14:31:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.722 14:31:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.722 14:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:40.722 "name": "raid_bdev1", 00:06:40.722 "uuid": "e8cbbb0d-59be-4f68-a708-71cc12a0977d", 00:06:40.722 "strip_size_kb": 0, 00:06:40.722 "state": "online", 00:06:40.722 "raid_level": "raid1", 00:06:40.722 "superblock": true, 00:06:40.722 "num_base_bdevs": 2, 00:06:40.722 "num_base_bdevs_discovered": 2, 00:06:40.722 "num_base_bdevs_operational": 2, 00:06:40.722 "base_bdevs_list": [ 00:06:40.722 { 00:06:40.722 "name": "BaseBdev1", 00:06:40.722 "uuid": "ba1df267-9c4c-51be-a016-3215f1067c21", 00:06:40.722 "is_configured": true, 00:06:40.722 "data_offset": 2048, 00:06:40.722 "data_size": 63488 00:06:40.722 }, 00:06:40.722 { 00:06:40.722 "name": "BaseBdev2", 00:06:40.722 "uuid": "21ce7ce3-43ef-54fa-8fcc-9d8621bc8cb0", 00:06:40.722 "is_configured": true, 00:06:40.722 "data_offset": 2048, 00:06:40.722 "data_size": 63488 00:06:40.722 } 00:06:40.722 ] 00:06:40.722 }' 00:06:40.722 14:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:40.722 14:31:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.023 14:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:06:41.023 14:31:32 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:41.023 [2024-10-01 14:31:32.659239] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:41.986 14:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:06:41.986 14:31:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.986 14:31:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.986 [2024-10-01 14:31:33.577507] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:06:41.986 [2024-10-01 14:31:33.577572] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:41.986 [2024-10-01 14:31:33.577767] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:06:41.986 14:31:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.986 14:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:06:41.986 14:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:06:41.986 14:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:06:41.986 14:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:06:41.986 14:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:06:41.986 14:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:41.986 14:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:41.986 14:31:33 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:06:41.986 14:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:06:41.986 14:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:41.986 14:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:41.986 14:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:41.986 14:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:41.986 14:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:41.986 14:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:41.986 14:31:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.986 14:31:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.986 14:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:41.986 14:31:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.986 14:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:41.986 "name": "raid_bdev1", 00:06:41.986 "uuid": "e8cbbb0d-59be-4f68-a708-71cc12a0977d", 00:06:41.986 "strip_size_kb": 0, 00:06:41.986 "state": "online", 00:06:41.986 "raid_level": "raid1", 00:06:41.986 "superblock": true, 00:06:41.986 "num_base_bdevs": 2, 00:06:41.986 "num_base_bdevs_discovered": 1, 00:06:41.986 "num_base_bdevs_operational": 1, 00:06:41.986 "base_bdevs_list": [ 00:06:41.986 { 00:06:41.986 "name": null, 00:06:41.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:41.986 "is_configured": false, 00:06:41.986 "data_offset": 0, 00:06:41.986 "data_size": 63488 00:06:41.986 }, 00:06:41.986 { 00:06:41.986 "name": 
"BaseBdev2", 00:06:41.986 "uuid": "21ce7ce3-43ef-54fa-8fcc-9d8621bc8cb0", 00:06:41.986 "is_configured": true, 00:06:41.986 "data_offset": 2048, 00:06:41.986 "data_size": 63488 00:06:41.986 } 00:06:41.986 ] 00:06:41.986 }' 00:06:41.986 14:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:41.986 14:31:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.246 14:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:42.246 14:31:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.246 14:31:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.246 [2024-10-01 14:31:33.912271] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:42.246 [2024-10-01 14:31:33.912304] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:42.246 [2024-10-01 14:31:33.915297] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:42.246 [2024-10-01 14:31:33.915338] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:42.246 [2024-10-01 14:31:33.915394] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:42.246 [2024-10-01 14:31:33.915403] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:06:42.246 { 00:06:42.246 "results": [ 00:06:42.246 { 00:06:42.246 "job": "raid_bdev1", 00:06:42.246 "core_mask": "0x1", 00:06:42.246 "workload": "randrw", 00:06:42.246 "percentage": 50, 00:06:42.246 "status": "finished", 00:06:42.246 "queue_depth": 1, 00:06:42.246 "io_size": 131072, 00:06:42.246 "runtime": 1.251109, 00:06:42.246 "iops": 19278.096472809324, 00:06:42.246 "mibps": 2409.7620591011655, 00:06:42.246 "io_failed": 0, 00:06:42.246 "io_timeout": 0, 
00:06:42.246 "avg_latency_us": 48.72736578567169, 00:06:42.246 "min_latency_us": 28.553846153846155, 00:06:42.246 "max_latency_us": 1676.2092307692308 00:06:42.246 } 00:06:42.246 ], 00:06:42.246 "core_count": 1 00:06:42.246 } 00:06:42.246 14:31:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.246 14:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62431 00:06:42.246 14:31:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 62431 ']' 00:06:42.246 14:31:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 62431 00:06:42.246 14:31:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:06:42.246 14:31:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:42.246 14:31:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62431 00:06:42.507 14:31:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:42.507 14:31:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:42.507 killing process with pid 62431 00:06:42.507 14:31:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62431' 00:06:42.507 14:31:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 62431 00:06:42.507 14:31:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 62431 00:06:42.507 [2024-10-01 14:31:33.940724] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:42.507 [2024-10-01 14:31:34.030108] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:43.450 14:31:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.2H0J0b5P4W 00:06:43.450 14:31:34 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:06:43.450 14:31:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:06:43.450 14:31:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:06:43.450 14:31:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:06:43.450 14:31:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:43.450 14:31:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:06:43.450 14:31:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:06:43.450 00:06:43.450 real 0m3.715s 00:06:43.450 user 0m4.390s 00:06:43.450 sys 0m0.426s 00:06:43.450 ************************************ 00:06:43.450 END TEST raid_write_error_test 00:06:43.450 ************************************ 00:06:43.450 14:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.450 14:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.450 14:31:34 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:06:43.450 14:31:34 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:06:43.450 14:31:34 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:06:43.450 14:31:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:43.450 14:31:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.450 14:31:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:43.450 ************************************ 00:06:43.450 START TEST raid_state_function_test 00:06:43.450 ************************************ 00:06:43.450 14:31:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false 00:06:43.450 14:31:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:06:43.450 14:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:06:43.450 14:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:06:43.450 14:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:43.450 14:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:43.450 14:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:43.450 14:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:43.450 14:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:43.450 14:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:43.450 14:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:43.450 14:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:43.450 14:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:43.450 14:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:06:43.450 14:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:43.450 14:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:43.450 14:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:06:43.450 14:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:43.450 14:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:43.450 14:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:43.450 
14:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:43.450 14:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:43.450 14:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:06:43.450 14:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:43.450 14:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:43.450 14:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:06:43.450 14:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:06:43.450 Process raid pid: 62558 00:06:43.450 14:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62558 00:06:43.450 14:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62558' 00:06:43.450 14:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62558 00:06:43.450 14:31:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 62558 ']' 00:06:43.450 14:31:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.450 14:31:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:43.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.450 14:31:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:43.450 14:31:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:43.450 14:31:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.450 14:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:43.450 [2024-10-01 14:31:35.049136] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:06:43.450 [2024-10-01 14:31:35.049249] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:43.711 [2024-10-01 14:31:35.197812] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.973 [2024-10-01 14:31:35.401910] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.973 [2024-10-01 14:31:35.543438] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:43.973 [2024-10-01 14:31:35.543485] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:44.318 14:31:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:44.318 14:31:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:06:44.318 14:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:06:44.318 14:31:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.318 14:31:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.318 [2024-10-01 14:31:35.899731] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:44.318 [2024-10-01 
14:31:35.899792] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:44.318 [2024-10-01 14:31:35.899802] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:44.318 [2024-10-01 14:31:35.899811] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:44.318 [2024-10-01 14:31:35.899818] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:06:44.318 [2024-10-01 14:31:35.899828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:06:44.318 14:31:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.318 14:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:06:44.318 14:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:44.318 14:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:44.318 14:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:44.318 14:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:44.318 14:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:44.318 14:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:44.318 14:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:44.318 14:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:44.318 14:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:44.318 14:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:06:44.318 14:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:44.318 14:31:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.318 14:31:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.318 14:31:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.318 14:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:44.318 "name": "Existed_Raid", 00:06:44.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:44.318 "strip_size_kb": 64, 00:06:44.318 "state": "configuring", 00:06:44.318 "raid_level": "raid0", 00:06:44.318 "superblock": false, 00:06:44.318 "num_base_bdevs": 3, 00:06:44.318 "num_base_bdevs_discovered": 0, 00:06:44.318 "num_base_bdevs_operational": 3, 00:06:44.318 "base_bdevs_list": [ 00:06:44.318 { 00:06:44.318 "name": "BaseBdev1", 00:06:44.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:44.318 "is_configured": false, 00:06:44.318 "data_offset": 0, 00:06:44.318 "data_size": 0 00:06:44.318 }, 00:06:44.318 { 00:06:44.318 "name": "BaseBdev2", 00:06:44.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:44.318 "is_configured": false, 00:06:44.318 "data_offset": 0, 00:06:44.318 "data_size": 0 00:06:44.318 }, 00:06:44.318 { 00:06:44.318 "name": "BaseBdev3", 00:06:44.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:44.318 "is_configured": false, 00:06:44.318 "data_offset": 0, 00:06:44.318 "data_size": 0 00:06:44.318 } 00:06:44.318 ] 00:06:44.318 }' 00:06:44.318 14:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:44.318 14:31:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.595 14:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:06:44.595 14:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.595 14:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.595 [2024-10-01 14:31:36.251727] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:44.595 [2024-10-01 14:31:36.251770] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:06:44.595 14:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.595 14:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:06:44.595 14:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.595 14:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.595 [2024-10-01 14:31:36.263757] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:44.595 [2024-10-01 14:31:36.263814] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:44.595 [2024-10-01 14:31:36.263823] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:44.595 [2024-10-01 14:31:36.263833] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:44.595 [2024-10-01 14:31:36.263840] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:06:44.595 [2024-10-01 14:31:36.263849] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:06:44.595 14:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.595 14:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev1 00:06:44.595 14:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.595 14:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.857 [2024-10-01 14:31:36.310371] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:44.857 BaseBdev1 00:06:44.857 14:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.857 14:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:44.857 14:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:06:44.857 14:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:44.857 14:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:06:44.857 14:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:44.857 14:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:44.857 14:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:06:44.858 14:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.858 14:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.858 14:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.858 14:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:44.858 14:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.858 14:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.858 [ 00:06:44.858 { 
00:06:44.858 "name": "BaseBdev1", 00:06:44.858 "aliases": [ 00:06:44.858 "caefa8b9-0a5e-4460-875a-ed8c3503091c" 00:06:44.858 ], 00:06:44.858 "product_name": "Malloc disk", 00:06:44.858 "block_size": 512, 00:06:44.858 "num_blocks": 65536, 00:06:44.858 "uuid": "caefa8b9-0a5e-4460-875a-ed8c3503091c", 00:06:44.858 "assigned_rate_limits": { 00:06:44.858 "rw_ios_per_sec": 0, 00:06:44.858 "rw_mbytes_per_sec": 0, 00:06:44.858 "r_mbytes_per_sec": 0, 00:06:44.858 "w_mbytes_per_sec": 0 00:06:44.858 }, 00:06:44.858 "claimed": true, 00:06:44.858 "claim_type": "exclusive_write", 00:06:44.858 "zoned": false, 00:06:44.858 "supported_io_types": { 00:06:44.858 "read": true, 00:06:44.858 "write": true, 00:06:44.858 "unmap": true, 00:06:44.858 "flush": true, 00:06:44.858 "reset": true, 00:06:44.858 "nvme_admin": false, 00:06:44.858 "nvme_io": false, 00:06:44.858 "nvme_io_md": false, 00:06:44.858 "write_zeroes": true, 00:06:44.858 "zcopy": true, 00:06:44.858 "get_zone_info": false, 00:06:44.858 "zone_management": false, 00:06:44.858 "zone_append": false, 00:06:44.858 "compare": false, 00:06:44.858 "compare_and_write": false, 00:06:44.858 "abort": true, 00:06:44.858 "seek_hole": false, 00:06:44.858 "seek_data": false, 00:06:44.858 "copy": true, 00:06:44.858 "nvme_iov_md": false 00:06:44.858 }, 00:06:44.858 "memory_domains": [ 00:06:44.858 { 00:06:44.858 "dma_device_id": "system", 00:06:44.858 "dma_device_type": 1 00:06:44.858 }, 00:06:44.858 { 00:06:44.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:44.858 "dma_device_type": 2 00:06:44.858 } 00:06:44.858 ], 00:06:44.858 "driver_specific": {} 00:06:44.858 } 00:06:44.858 ] 00:06:44.858 14:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.858 14:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:06:44.858 14:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 
00:06:44.858 14:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:44.858 14:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:44.858 14:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:44.858 14:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:44.858 14:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:44.858 14:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:44.858 14:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:44.858 14:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:44.858 14:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:44.858 14:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:44.858 14:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.858 14:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.858 14:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:44.858 14:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.858 14:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:44.858 "name": "Existed_Raid", 00:06:44.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:44.858 "strip_size_kb": 64, 00:06:44.858 "state": "configuring", 00:06:44.858 "raid_level": "raid0", 00:06:44.858 "superblock": false, 00:06:44.858 "num_base_bdevs": 3, 00:06:44.858 
"num_base_bdevs_discovered": 1, 00:06:44.858 "num_base_bdevs_operational": 3, 00:06:44.858 "base_bdevs_list": [ 00:06:44.858 { 00:06:44.858 "name": "BaseBdev1", 00:06:44.858 "uuid": "caefa8b9-0a5e-4460-875a-ed8c3503091c", 00:06:44.858 "is_configured": true, 00:06:44.858 "data_offset": 0, 00:06:44.858 "data_size": 65536 00:06:44.858 }, 00:06:44.858 { 00:06:44.858 "name": "BaseBdev2", 00:06:44.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:44.858 "is_configured": false, 00:06:44.858 "data_offset": 0, 00:06:44.858 "data_size": 0 00:06:44.858 }, 00:06:44.858 { 00:06:44.858 "name": "BaseBdev3", 00:06:44.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:44.858 "is_configured": false, 00:06:44.858 "data_offset": 0, 00:06:44.858 "data_size": 0 00:06:44.858 } 00:06:44.858 ] 00:06:44.858 }' 00:06:44.858 14:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:44.858 14:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.118 14:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:45.118 14:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.118 14:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.118 [2024-10-01 14:31:36.642507] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:45.118 [2024-10-01 14:31:36.642568] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:06:45.118 14:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.118 14:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:06:45.118 14:31:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.118 14:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.118 [2024-10-01 14:31:36.654560] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:45.118 [2024-10-01 14:31:36.656459] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:45.118 [2024-10-01 14:31:36.656514] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:45.118 [2024-10-01 14:31:36.656525] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:06:45.118 [2024-10-01 14:31:36.656535] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:06:45.118 14:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.118 14:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:45.118 14:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:45.118 14:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:06:45.118 14:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:45.118 14:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:45.118 14:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:45.118 14:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:45.118 14:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:45.118 14:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:45.118 14:31:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:45.118 14:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:45.118 14:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:45.118 14:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:45.118 14:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:45.118 14:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.118 14:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.118 14:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.118 14:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:45.118 "name": "Existed_Raid", 00:06:45.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:45.118 "strip_size_kb": 64, 00:06:45.118 "state": "configuring", 00:06:45.118 "raid_level": "raid0", 00:06:45.118 "superblock": false, 00:06:45.118 "num_base_bdevs": 3, 00:06:45.118 "num_base_bdevs_discovered": 1, 00:06:45.118 "num_base_bdevs_operational": 3, 00:06:45.118 "base_bdevs_list": [ 00:06:45.118 { 00:06:45.118 "name": "BaseBdev1", 00:06:45.118 "uuid": "caefa8b9-0a5e-4460-875a-ed8c3503091c", 00:06:45.118 "is_configured": true, 00:06:45.118 "data_offset": 0, 00:06:45.118 "data_size": 65536 00:06:45.118 }, 00:06:45.118 { 00:06:45.118 "name": "BaseBdev2", 00:06:45.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:45.118 "is_configured": false, 00:06:45.118 "data_offset": 0, 00:06:45.118 "data_size": 0 00:06:45.118 }, 00:06:45.118 { 00:06:45.118 "name": "BaseBdev3", 00:06:45.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:45.119 "is_configured": false, 00:06:45.119 "data_offset": 
0, 00:06:45.119 "data_size": 0 00:06:45.119 } 00:06:45.119 ] 00:06:45.119 }' 00:06:45.119 14:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:45.119 14:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.381 14:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:45.381 14:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.381 14:31:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.381 [2024-10-01 14:31:37.013861] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:45.381 BaseBdev2 00:06:45.381 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.381 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:45.381 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:06:45.381 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:45.381 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:06:45.381 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:45.381 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:45.381 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:06:45.381 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.381 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.381 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:06:45.381 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:45.381 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.381 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.381 [ 00:06:45.381 { 00:06:45.381 "name": "BaseBdev2", 00:06:45.381 "aliases": [ 00:06:45.381 "62c5ee4d-6032-4444-8c06-a5e9df5382b6" 00:06:45.381 ], 00:06:45.381 "product_name": "Malloc disk", 00:06:45.381 "block_size": 512, 00:06:45.381 "num_blocks": 65536, 00:06:45.381 "uuid": "62c5ee4d-6032-4444-8c06-a5e9df5382b6", 00:06:45.381 "assigned_rate_limits": { 00:06:45.381 "rw_ios_per_sec": 0, 00:06:45.381 "rw_mbytes_per_sec": 0, 00:06:45.381 "r_mbytes_per_sec": 0, 00:06:45.381 "w_mbytes_per_sec": 0 00:06:45.381 }, 00:06:45.381 "claimed": true, 00:06:45.381 "claim_type": "exclusive_write", 00:06:45.381 "zoned": false, 00:06:45.381 "supported_io_types": { 00:06:45.381 "read": true, 00:06:45.381 "write": true, 00:06:45.381 "unmap": true, 00:06:45.381 "flush": true, 00:06:45.381 "reset": true, 00:06:45.381 "nvme_admin": false, 00:06:45.381 "nvme_io": false, 00:06:45.381 "nvme_io_md": false, 00:06:45.381 "write_zeroes": true, 00:06:45.381 "zcopy": true, 00:06:45.381 "get_zone_info": false, 00:06:45.381 "zone_management": false, 00:06:45.381 "zone_append": false, 00:06:45.381 "compare": false, 00:06:45.381 "compare_and_write": false, 00:06:45.381 "abort": true, 00:06:45.381 "seek_hole": false, 00:06:45.381 "seek_data": false, 00:06:45.381 "copy": true, 00:06:45.381 "nvme_iov_md": false 00:06:45.381 }, 00:06:45.381 "memory_domains": [ 00:06:45.381 { 00:06:45.381 "dma_device_id": "system", 00:06:45.381 "dma_device_type": 1 00:06:45.381 }, 00:06:45.381 { 00:06:45.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:45.381 "dma_device_type": 2 00:06:45.381 } 00:06:45.381 ], 00:06:45.381 "driver_specific": {} 00:06:45.381 } 
00:06:45.381 ] 00:06:45.381 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.381 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:06:45.381 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:45.381 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:45.381 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:06:45.381 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:45.381 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:45.381 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:45.381 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:45.381 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:45.381 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:45.381 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:45.381 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:45.381 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:45.381 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:45.381 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:45.381 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.381 14:31:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.381 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.642 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:45.642 "name": "Existed_Raid", 00:06:45.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:45.642 "strip_size_kb": 64, 00:06:45.642 "state": "configuring", 00:06:45.642 "raid_level": "raid0", 00:06:45.642 "superblock": false, 00:06:45.642 "num_base_bdevs": 3, 00:06:45.642 "num_base_bdevs_discovered": 2, 00:06:45.642 "num_base_bdevs_operational": 3, 00:06:45.642 "base_bdevs_list": [ 00:06:45.642 { 00:06:45.642 "name": "BaseBdev1", 00:06:45.642 "uuid": "caefa8b9-0a5e-4460-875a-ed8c3503091c", 00:06:45.642 "is_configured": true, 00:06:45.642 "data_offset": 0, 00:06:45.642 "data_size": 65536 00:06:45.642 }, 00:06:45.642 { 00:06:45.642 "name": "BaseBdev2", 00:06:45.642 "uuid": "62c5ee4d-6032-4444-8c06-a5e9df5382b6", 00:06:45.642 "is_configured": true, 00:06:45.642 "data_offset": 0, 00:06:45.642 "data_size": 65536 00:06:45.642 }, 00:06:45.642 { 00:06:45.642 "name": "BaseBdev3", 00:06:45.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:45.642 "is_configured": false, 00:06:45.642 "data_offset": 0, 00:06:45.642 "data_size": 0 00:06:45.642 } 00:06:45.642 ] 00:06:45.642 }' 00:06:45.642 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:45.642 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.900 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:06:45.900 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.900 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.900 [2024-10-01 14:31:37.397505] 
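The `verify_raid_bdev_state` passes traced above boil down to selecting the raid bdev out of `bdev_raid_get_bdevs all` output with `jq` and reading fields from the captured info. A minimal standalone sketch of that filtering, using a trimmed sample JSON modeled on the log output instead of live `rpc_cmd` output (requires `jq` on PATH):

```shell
#!/bin/sh
# Sample stand-in for `rpc_cmd bdev_raid_get_bdevs all` output; field values
# mirror the trace above (2 of 3 base bdevs discovered while configuring),
# but the object is trimmed for illustration.
raid_bdevs='[{"name":"Existed_Raid","state":"configuring","raid_level":"raid0","strip_size_kb":64,"num_base_bdevs":3,"num_base_bdevs_discovered":2,"num_base_bdevs_operational":3}]'

# bdev_raid.sh@113-style selection: keep only the bdev named Existed_Raid
tmp=$(echo "$raid_bdevs" | jq -r '.[] | select(.name == "Existed_Raid")')

# Read individual fields back out of the captured info
state=$(echo "$tmp" | jq -r '.state')
discovered=$(echo "$tmp" | jq -r '.num_base_bdevs_discovered')
echo "$state $discovered"
```

Capturing the selected object once and re-querying it with `jq` avoids repeated RPC round-trips, which is the same shape the test script uses.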
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:06:45.900 [2024-10-01 14:31:37.397561] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:45.900 [2024-10-01 14:31:37.397575] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:45.900 [2024-10-01 14:31:37.397858] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:06:45.900 [2024-10-01 14:31:37.398002] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:45.900 [2024-10-01 14:31:37.398014] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:06:45.900 [2024-10-01 14:31:37.398255] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:45.900 BaseBdev3 00:06:45.900 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.900 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:06:45.901 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:06:45.901 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:45.901 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:06:45.901 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:45.901 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:45.901 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:06:45.901 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.901 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:06:45.901 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.901 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:06:45.901 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.901 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.901 [ 00:06:45.901 { 00:06:45.901 "name": "BaseBdev3", 00:06:45.901 "aliases": [ 00:06:45.901 "1638665c-5750-48dc-8a10-1220b07c131b" 00:06:45.901 ], 00:06:45.901 "product_name": "Malloc disk", 00:06:45.901 "block_size": 512, 00:06:45.901 "num_blocks": 65536, 00:06:45.901 "uuid": "1638665c-5750-48dc-8a10-1220b07c131b", 00:06:45.901 "assigned_rate_limits": { 00:06:45.901 "rw_ios_per_sec": 0, 00:06:45.901 "rw_mbytes_per_sec": 0, 00:06:45.901 "r_mbytes_per_sec": 0, 00:06:45.901 "w_mbytes_per_sec": 0 00:06:45.901 }, 00:06:45.901 "claimed": true, 00:06:45.901 "claim_type": "exclusive_write", 00:06:45.901 "zoned": false, 00:06:45.901 "supported_io_types": { 00:06:45.901 "read": true, 00:06:45.901 "write": true, 00:06:45.901 "unmap": true, 00:06:45.901 "flush": true, 00:06:45.901 "reset": true, 00:06:45.901 "nvme_admin": false, 00:06:45.901 "nvme_io": false, 00:06:45.901 "nvme_io_md": false, 00:06:45.901 "write_zeroes": true, 00:06:45.901 "zcopy": true, 00:06:45.901 "get_zone_info": false, 00:06:45.901 "zone_management": false, 00:06:45.901 "zone_append": false, 00:06:45.901 "compare": false, 00:06:45.901 "compare_and_write": false, 00:06:45.901 "abort": true, 00:06:45.901 "seek_hole": false, 00:06:45.901 "seek_data": false, 00:06:45.901 "copy": true, 00:06:45.901 "nvme_iov_md": false 00:06:45.901 }, 00:06:45.901 "memory_domains": [ 00:06:45.901 { 00:06:45.901 "dma_device_id": "system", 00:06:45.901 "dma_device_type": 1 00:06:45.901 }, 00:06:45.901 { 00:06:45.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:06:45.901 "dma_device_type": 2 00:06:45.901 } 00:06:45.901 ], 00:06:45.901 "driver_specific": {} 00:06:45.901 } 00:06:45.901 ] 00:06:45.901 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.901 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:06:45.901 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:45.901 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:45.901 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:06:45.901 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:45.901 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:45.901 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:45.901 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:45.901 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:45.901 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:45.901 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:45.901 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:45.901 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:45.901 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:45.901 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.901 14:31:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:06:45.901 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:45.901 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.901 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:45.901 "name": "Existed_Raid", 00:06:45.901 "uuid": "6e7becf7-90f4-4075-bb6c-a1f1f857809c", 00:06:45.901 "strip_size_kb": 64, 00:06:45.901 "state": "online", 00:06:45.901 "raid_level": "raid0", 00:06:45.901 "superblock": false, 00:06:45.901 "num_base_bdevs": 3, 00:06:45.901 "num_base_bdevs_discovered": 3, 00:06:45.901 "num_base_bdevs_operational": 3, 00:06:45.901 "base_bdevs_list": [ 00:06:45.901 { 00:06:45.901 "name": "BaseBdev1", 00:06:45.901 "uuid": "caefa8b9-0a5e-4460-875a-ed8c3503091c", 00:06:45.901 "is_configured": true, 00:06:45.901 "data_offset": 0, 00:06:45.901 "data_size": 65536 00:06:45.901 }, 00:06:45.901 { 00:06:45.901 "name": "BaseBdev2", 00:06:45.901 "uuid": "62c5ee4d-6032-4444-8c06-a5e9df5382b6", 00:06:45.901 "is_configured": true, 00:06:45.901 "data_offset": 0, 00:06:45.901 "data_size": 65536 00:06:45.901 }, 00:06:45.901 { 00:06:45.901 "name": "BaseBdev3", 00:06:45.901 "uuid": "1638665c-5750-48dc-8a10-1220b07c131b", 00:06:45.901 "is_configured": true, 00:06:45.901 "data_offset": 0, 00:06:45.901 "data_size": 65536 00:06:45.901 } 00:06:45.901 ] 00:06:45.901 }' 00:06:45.901 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:45.901 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.163 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:46.163 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:46.163 14:31:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:46.163 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:46.163 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:46.163 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:46.163 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:46.163 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.163 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:46.163 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.163 [2024-10-01 14:31:37.750006] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:46.163 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.163 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:46.163 "name": "Existed_Raid", 00:06:46.163 "aliases": [ 00:06:46.163 "6e7becf7-90f4-4075-bb6c-a1f1f857809c" 00:06:46.163 ], 00:06:46.163 "product_name": "Raid Volume", 00:06:46.163 "block_size": 512, 00:06:46.163 "num_blocks": 196608, 00:06:46.163 "uuid": "6e7becf7-90f4-4075-bb6c-a1f1f857809c", 00:06:46.163 "assigned_rate_limits": { 00:06:46.163 "rw_ios_per_sec": 0, 00:06:46.163 "rw_mbytes_per_sec": 0, 00:06:46.163 "r_mbytes_per_sec": 0, 00:06:46.163 "w_mbytes_per_sec": 0 00:06:46.163 }, 00:06:46.163 "claimed": false, 00:06:46.163 "zoned": false, 00:06:46.163 "supported_io_types": { 00:06:46.163 "read": true, 00:06:46.163 "write": true, 00:06:46.163 "unmap": true, 00:06:46.163 "flush": true, 00:06:46.163 "reset": true, 00:06:46.163 "nvme_admin": false, 00:06:46.163 "nvme_io": false, 00:06:46.163 "nvme_io_md": false, 00:06:46.163 
"write_zeroes": true, 00:06:46.163 "zcopy": false, 00:06:46.163 "get_zone_info": false, 00:06:46.163 "zone_management": false, 00:06:46.163 "zone_append": false, 00:06:46.163 "compare": false, 00:06:46.163 "compare_and_write": false, 00:06:46.163 "abort": false, 00:06:46.163 "seek_hole": false, 00:06:46.163 "seek_data": false, 00:06:46.163 "copy": false, 00:06:46.163 "nvme_iov_md": false 00:06:46.163 }, 00:06:46.163 "memory_domains": [ 00:06:46.163 { 00:06:46.163 "dma_device_id": "system", 00:06:46.163 "dma_device_type": 1 00:06:46.163 }, 00:06:46.163 { 00:06:46.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:46.163 "dma_device_type": 2 00:06:46.163 }, 00:06:46.163 { 00:06:46.163 "dma_device_id": "system", 00:06:46.163 "dma_device_type": 1 00:06:46.163 }, 00:06:46.163 { 00:06:46.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:46.163 "dma_device_type": 2 00:06:46.163 }, 00:06:46.163 { 00:06:46.163 "dma_device_id": "system", 00:06:46.163 "dma_device_type": 1 00:06:46.163 }, 00:06:46.163 { 00:06:46.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:46.163 "dma_device_type": 2 00:06:46.163 } 00:06:46.163 ], 00:06:46.163 "driver_specific": { 00:06:46.163 "raid": { 00:06:46.163 "uuid": "6e7becf7-90f4-4075-bb6c-a1f1f857809c", 00:06:46.163 "strip_size_kb": 64, 00:06:46.163 "state": "online", 00:06:46.163 "raid_level": "raid0", 00:06:46.163 "superblock": false, 00:06:46.163 "num_base_bdevs": 3, 00:06:46.163 "num_base_bdevs_discovered": 3, 00:06:46.163 "num_base_bdevs_operational": 3, 00:06:46.163 "base_bdevs_list": [ 00:06:46.163 { 00:06:46.163 "name": "BaseBdev1", 00:06:46.163 "uuid": "caefa8b9-0a5e-4460-875a-ed8c3503091c", 00:06:46.163 "is_configured": true, 00:06:46.163 "data_offset": 0, 00:06:46.163 "data_size": 65536 00:06:46.163 }, 00:06:46.163 { 00:06:46.163 "name": "BaseBdev2", 00:06:46.163 "uuid": "62c5ee4d-6032-4444-8c06-a5e9df5382b6", 00:06:46.163 "is_configured": true, 00:06:46.163 "data_offset": 0, 00:06:46.163 "data_size": 65536 00:06:46.163 }, 
00:06:46.163 { 00:06:46.163 "name": "BaseBdev3", 00:06:46.163 "uuid": "1638665c-5750-48dc-8a10-1220b07c131b", 00:06:46.163 "is_configured": true, 00:06:46.163 "data_offset": 0, 00:06:46.163 "data_size": 65536 00:06:46.163 } 00:06:46.163 ] 00:06:46.163 } 00:06:46.163 } 00:06:46.163 }' 00:06:46.163 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:46.163 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:46.163 BaseBdev2 00:06:46.163 BaseBdev3' 00:06:46.163 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:46.163 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:46.163 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:46.163 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:46.163 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:46.164 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.164 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.423 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.423 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:46.423 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:46.423 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:46.423 14:31:37 bdev_raid.raid_state_function_test 
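The `verify_raid_bdev_properties` pass above uses two further `jq` filters: one to list the configured base bdevs, and one to build a `"block_size md_size md_interleave dif_type"` comparison string. A hedged sketch with a trimmed sample object (not live RPC output); note that `jq`'s `join` renders `null` fields as empty strings, which is why a plain malloc bdev compares as `512` followed by three spaces (`[[ 512 == \5\1\2\ \ \ ]]` in the trace):

```shell
#!/bin/sh
# Trimmed sample modeled on the Existed_Raid dump above; only the fields the
# two filters touch are kept.
raid_info='{"block_size":512,"md_size":null,"md_interleave":null,"dif_type":null,"driver_specific":{"raid":{"base_bdevs_list":[{"name":"BaseBdev1","is_configured":true},{"name":"BaseBdev2","is_configured":true},{"name":"BaseBdev3","is_configured":true}]}}}'

# bdev_raid.sh@188-style: names of all configured base bdevs
names=$(echo "$raid_info" | jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')

# bdev_raid.sh@189-style: join turns the null metadata fields into empty
# strings, yielding "512" plus trailing spaces (command substitution strips
# trailing newlines, not trailing spaces)
cmp_raid_bdev=$(echo "$raid_info" | jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')
echo "$names"
```

The same join is then run per base bdev and string-compared against the raid volume's value, which is how the script checks that all members share a block size and metadata layout.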
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:46.423 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.423 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:46.423 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.423 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.423 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:46.423 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:46.423 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:46.423 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:06:46.423 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.423 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.423 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:46.423 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.423 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:46.424 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:46.424 14:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:46.424 14:31:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.424 14:31:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.424 [2024-10-01 14:31:37.937765] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:46.424 [2024-10-01 14:31:37.937800] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:46.424 [2024-10-01 14:31:37.937866] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:46.424 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.424 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:46.424 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:06:46.424 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:46.424 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:46.424 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:06:46.424 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:06:46.424 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:46.424 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:46.424 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:46.424 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:46.424 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:46.424 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:46.424 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:06:46.424 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:46.424 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:46.424 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:46.424 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:46.424 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.424 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.424 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.424 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:46.424 "name": "Existed_Raid", 00:06:46.424 "uuid": "6e7becf7-90f4-4075-bb6c-a1f1f857809c", 00:06:46.424 "strip_size_kb": 64, 00:06:46.424 "state": "offline", 00:06:46.424 "raid_level": "raid0", 00:06:46.424 "superblock": false, 00:06:46.424 "num_base_bdevs": 3, 00:06:46.424 "num_base_bdevs_discovered": 2, 00:06:46.424 "num_base_bdevs_operational": 2, 00:06:46.424 "base_bdevs_list": [ 00:06:46.424 { 00:06:46.424 "name": null, 00:06:46.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:46.424 "is_configured": false, 00:06:46.424 "data_offset": 0, 00:06:46.424 "data_size": 65536 00:06:46.424 }, 00:06:46.424 { 00:06:46.424 "name": "BaseBdev2", 00:06:46.424 "uuid": "62c5ee4d-6032-4444-8c06-a5e9df5382b6", 00:06:46.424 "is_configured": true, 00:06:46.424 "data_offset": 0, 00:06:46.424 "data_size": 65536 00:06:46.424 }, 00:06:46.424 { 00:06:46.424 "name": "BaseBdev3", 00:06:46.424 "uuid": "1638665c-5750-48dc-8a10-1220b07c131b", 00:06:46.424 "is_configured": true, 00:06:46.424 "data_offset": 0, 00:06:46.424 "data_size": 65536 00:06:46.424 } 00:06:46.424 ] 00:06:46.424 }' 00:06:46.424 
14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:46.424 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.993 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:46.993 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:46.993 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:46.993 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:46.993 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.993 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.993 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.993 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:46.993 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:46.993 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:46.993 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.993 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.993 [2024-10-01 14:31:38.415518] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:46.993 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.993 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:46.993 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:46.993 14:31:38 bdev_raid.raid_state_function_test 
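The deletion loop above reads the raid bdev's name back with `.[0]["name"]` after each base bdev removal and compares it against the expected name. A minimal sketch of that check, again with sample JSON standing in for live `rpc_cmd bdev_raid_get_bdevs all` output:

```shell
#!/bin/sh
# Sample stand-in: raid0 has no redundancy, so after one base bdev is
# removed the array goes offline but is still reported.
raid_bdevs='[{"name":"Existed_Raid","state":"offline","num_base_bdevs":3,"num_base_bdevs_discovered":2}]'

# bdev_raid.sh@271-style: name of the first (only) raid bdev
raid_bdev=$(echo "$raid_bdevs" | jq -r '.[0]["name"]')

# bdev_raid.sh@272-style comparison against the expected name
if [ "$raid_bdev" != "Existed_Raid" ]; then
  echo "unexpected raid bdev: $raid_bdev" >&2
  exit 1
fi
echo "$raid_bdev"
```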
-- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:46.994 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:46.994 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.994 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.994 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.994 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:46.994 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:46.994 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:06:46.994 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.994 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.994 [2024-10-01 14:31:38.519854] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:06:46.994 [2024-10-01 14:31:38.519917] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:06:46.994 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.994 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:46.994 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:46.994 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:46.994 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:46.994 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.994 
14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.994 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.994 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:46.994 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:46.994 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:06:46.994 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:06:46.994 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:06:46.994 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:46.994 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.994 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.994 BaseBdev2 00:06:46.994 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.994 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:06:46.994 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:06:46.994 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:46.994 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:06:46.994 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:46.994 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:46.994 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:06:46.994 14:31:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.994 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.994 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.994 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:46.994 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.994 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.994 [ 00:06:46.994 { 00:06:46.994 "name": "BaseBdev2", 00:06:46.994 "aliases": [ 00:06:46.994 "908e153f-527e-4526-a06f-06c1db943497" 00:06:46.994 ], 00:06:46.994 "product_name": "Malloc disk", 00:06:46.994 "block_size": 512, 00:06:46.994 "num_blocks": 65536, 00:06:46.994 "uuid": "908e153f-527e-4526-a06f-06c1db943497", 00:06:46.994 "assigned_rate_limits": { 00:06:46.994 "rw_ios_per_sec": 0, 00:06:46.994 "rw_mbytes_per_sec": 0, 00:06:46.994 "r_mbytes_per_sec": 0, 00:06:46.994 "w_mbytes_per_sec": 0 00:06:46.994 }, 00:06:46.994 "claimed": false, 00:06:46.994 "zoned": false, 00:06:46.994 "supported_io_types": { 00:06:46.994 "read": true, 00:06:46.994 "write": true, 00:06:46.994 "unmap": true, 00:06:46.994 "flush": true, 00:06:46.994 "reset": true, 00:06:46.994 "nvme_admin": false, 00:06:46.994 "nvme_io": false, 00:06:46.994 "nvme_io_md": false, 00:06:46.994 "write_zeroes": true, 00:06:46.994 "zcopy": true, 00:06:46.994 "get_zone_info": false, 00:06:46.994 "zone_management": false, 00:06:46.994 "zone_append": false, 00:06:46.994 "compare": false, 00:06:46.994 "compare_and_write": false, 00:06:46.994 "abort": true, 00:06:46.994 "seek_hole": false, 00:06:46.994 "seek_data": false, 00:06:46.994 "copy": true, 00:06:46.994 "nvme_iov_md": false 00:06:46.994 }, 00:06:46.994 "memory_domains": [ 00:06:46.994 { 00:06:46.994 
"dma_device_id": "system", 00:06:46.994 "dma_device_type": 1 00:06:46.994 }, 00:06:46.994 { 00:06:46.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:46.994 "dma_device_type": 2 00:06:46.994 } 00:06:46.994 ], 00:06:46.994 "driver_specific": {} 00:06:46.994 } 00:06:46.994 ] 00:06:46.994 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.994 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:06:46.994 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:06:46.994 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:06:46.994 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:06:46.994 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.994 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.252 BaseBdev3 00:06:47.252 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.252 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:06:47.252 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:06:47.252 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:47.252 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:06:47.252 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:47.252 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:47.252 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:06:47.252 14:31:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.252 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.252 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.252 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:06:47.252 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.252 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.252 [ 00:06:47.252 { 00:06:47.252 "name": "BaseBdev3", 00:06:47.252 "aliases": [ 00:06:47.252 "1d939061-9cdf-4692-ab94-97b84e50a8cc" 00:06:47.252 ], 00:06:47.252 "product_name": "Malloc disk", 00:06:47.252 "block_size": 512, 00:06:47.252 "num_blocks": 65536, 00:06:47.252 "uuid": "1d939061-9cdf-4692-ab94-97b84e50a8cc", 00:06:47.252 "assigned_rate_limits": { 00:06:47.252 "rw_ios_per_sec": 0, 00:06:47.252 "rw_mbytes_per_sec": 0, 00:06:47.252 "r_mbytes_per_sec": 0, 00:06:47.252 "w_mbytes_per_sec": 0 00:06:47.252 }, 00:06:47.252 "claimed": false, 00:06:47.252 "zoned": false, 00:06:47.252 "supported_io_types": { 00:06:47.252 "read": true, 00:06:47.252 "write": true, 00:06:47.252 "unmap": true, 00:06:47.252 "flush": true, 00:06:47.252 "reset": true, 00:06:47.253 "nvme_admin": false, 00:06:47.253 "nvme_io": false, 00:06:47.253 "nvme_io_md": false, 00:06:47.253 "write_zeroes": true, 00:06:47.253 "zcopy": true, 00:06:47.253 "get_zone_info": false, 00:06:47.253 "zone_management": false, 00:06:47.253 "zone_append": false, 00:06:47.253 "compare": false, 00:06:47.253 "compare_and_write": false, 00:06:47.253 "abort": true, 00:06:47.253 "seek_hole": false, 00:06:47.253 "seek_data": false, 00:06:47.253 "copy": true, 00:06:47.253 "nvme_iov_md": false 00:06:47.253 }, 00:06:47.253 "memory_domains": [ 00:06:47.253 { 00:06:47.253 
"dma_device_id": "system", 00:06:47.253 "dma_device_type": 1 00:06:47.253 }, 00:06:47.253 { 00:06:47.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:47.253 "dma_device_type": 2 00:06:47.253 } 00:06:47.253 ], 00:06:47.253 "driver_specific": {} 00:06:47.253 } 00:06:47.253 ] 00:06:47.253 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.253 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:06:47.253 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:06:47.253 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:06:47.253 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:06:47.253 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.253 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.253 [2024-10-01 14:31:38.734272] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:47.253 [2024-10-01 14:31:38.734335] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:47.253 [2024-10-01 14:31:38.734367] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:47.253 [2024-10-01 14:31:38.736421] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:06:47.253 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.253 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:06:47.253 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:47.253 
14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:47.253 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:47.253 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:47.253 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:47.253 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:47.253 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:47.253 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:47.253 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:47.253 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:47.253 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.253 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.253 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:47.253 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.253 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:47.253 "name": "Existed_Raid", 00:06:47.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:47.253 "strip_size_kb": 64, 00:06:47.253 "state": "configuring", 00:06:47.253 "raid_level": "raid0", 00:06:47.253 "superblock": false, 00:06:47.253 "num_base_bdevs": 3, 00:06:47.253 "num_base_bdevs_discovered": 2, 00:06:47.253 "num_base_bdevs_operational": 3, 00:06:47.253 "base_bdevs_list": [ 00:06:47.253 { 00:06:47.253 "name": 
"BaseBdev1", 00:06:47.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:47.253 "is_configured": false, 00:06:47.253 "data_offset": 0, 00:06:47.253 "data_size": 0 00:06:47.253 }, 00:06:47.253 { 00:06:47.253 "name": "BaseBdev2", 00:06:47.253 "uuid": "908e153f-527e-4526-a06f-06c1db943497", 00:06:47.253 "is_configured": true, 00:06:47.253 "data_offset": 0, 00:06:47.253 "data_size": 65536 00:06:47.253 }, 00:06:47.253 { 00:06:47.253 "name": "BaseBdev3", 00:06:47.253 "uuid": "1d939061-9cdf-4692-ab94-97b84e50a8cc", 00:06:47.253 "is_configured": true, 00:06:47.253 "data_offset": 0, 00:06:47.253 "data_size": 65536 00:06:47.253 } 00:06:47.253 ] 00:06:47.253 }' 00:06:47.253 14:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:47.253 14:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.522 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:06:47.522 14:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.522 14:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.522 [2024-10-01 14:31:39.058288] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:47.522 14:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.522 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:06:47.522 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:47.522 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:47.522 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:47.522 14:31:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:47.522 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:47.522 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:47.522 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:47.522 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:47.522 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:47.522 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:47.522 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:47.522 14:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.522 14:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.522 14:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.522 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:47.522 "name": "Existed_Raid", 00:06:47.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:47.522 "strip_size_kb": 64, 00:06:47.522 "state": "configuring", 00:06:47.522 "raid_level": "raid0", 00:06:47.522 "superblock": false, 00:06:47.522 "num_base_bdevs": 3, 00:06:47.522 "num_base_bdevs_discovered": 1, 00:06:47.522 "num_base_bdevs_operational": 3, 00:06:47.522 "base_bdevs_list": [ 00:06:47.522 { 00:06:47.522 "name": "BaseBdev1", 00:06:47.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:47.522 "is_configured": false, 00:06:47.522 "data_offset": 0, 00:06:47.522 "data_size": 0 00:06:47.522 }, 00:06:47.522 { 00:06:47.522 "name": null, 00:06:47.522 "uuid": "908e153f-527e-4526-a06f-06c1db943497", 
00:06:47.522 "is_configured": false, 00:06:47.522 "data_offset": 0, 00:06:47.522 "data_size": 65536 00:06:47.522 }, 00:06:47.522 { 00:06:47.522 "name": "BaseBdev3", 00:06:47.522 "uuid": "1d939061-9cdf-4692-ab94-97b84e50a8cc", 00:06:47.522 "is_configured": true, 00:06:47.522 "data_offset": 0, 00:06:47.522 "data_size": 65536 00:06:47.522 } 00:06:47.522 ] 00:06:47.522 }' 00:06:47.522 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:47.522 14:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.781 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:47.781 14:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.781 14:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.781 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:06:47.781 14:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.781 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:06:47.781 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:47.781 14:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.781 14:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.781 [2024-10-01 14:31:39.433223] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:47.781 BaseBdev1 00:06:47.781 14:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.781 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:06:47.781 14:31:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:06:47.781 14:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:47.781 14:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:06:47.781 14:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:47.781 14:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:47.781 14:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:06:47.781 14:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.781 14:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.781 14:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.781 14:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:47.781 14:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.781 14:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.781 [ 00:06:47.781 { 00:06:47.781 "name": "BaseBdev1", 00:06:47.781 "aliases": [ 00:06:47.781 "5763a12a-909a-418d-be4b-5d59d5d46f96" 00:06:47.781 ], 00:06:47.781 "product_name": "Malloc disk", 00:06:47.781 "block_size": 512, 00:06:47.781 "num_blocks": 65536, 00:06:47.781 "uuid": "5763a12a-909a-418d-be4b-5d59d5d46f96", 00:06:47.781 "assigned_rate_limits": { 00:06:47.781 "rw_ios_per_sec": 0, 00:06:47.781 "rw_mbytes_per_sec": 0, 00:06:47.781 "r_mbytes_per_sec": 0, 00:06:47.781 "w_mbytes_per_sec": 0 00:06:47.781 }, 00:06:47.781 "claimed": true, 00:06:47.781 "claim_type": "exclusive_write", 00:06:47.781 "zoned": false, 00:06:47.781 "supported_io_types": { 
00:06:47.781 "read": true, 00:06:47.781 "write": true, 00:06:47.781 "unmap": true, 00:06:47.781 "flush": true, 00:06:47.781 "reset": true, 00:06:47.781 "nvme_admin": false, 00:06:47.781 "nvme_io": false, 00:06:47.781 "nvme_io_md": false, 00:06:47.781 "write_zeroes": true, 00:06:47.781 "zcopy": true, 00:06:47.781 "get_zone_info": false, 00:06:47.781 "zone_management": false, 00:06:47.781 "zone_append": false, 00:06:47.781 "compare": false, 00:06:47.781 "compare_and_write": false, 00:06:47.781 "abort": true, 00:06:47.781 "seek_hole": false, 00:06:47.781 "seek_data": false, 00:06:48.040 "copy": true, 00:06:48.040 "nvme_iov_md": false 00:06:48.040 }, 00:06:48.040 "memory_domains": [ 00:06:48.040 { 00:06:48.040 "dma_device_id": "system", 00:06:48.040 "dma_device_type": 1 00:06:48.040 }, 00:06:48.040 { 00:06:48.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:48.040 "dma_device_type": 2 00:06:48.040 } 00:06:48.040 ], 00:06:48.040 "driver_specific": {} 00:06:48.040 } 00:06:48.040 ] 00:06:48.040 14:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.040 14:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:06:48.040 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:06:48.040 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:48.040 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:48.040 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:48.040 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:48.040 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:48.040 14:31:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:48.040 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:48.040 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:48.040 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:48.040 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:48.040 14:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.040 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:48.040 14:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.040 14:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.040 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:48.040 "name": "Existed_Raid", 00:06:48.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:48.040 "strip_size_kb": 64, 00:06:48.040 "state": "configuring", 00:06:48.040 "raid_level": "raid0", 00:06:48.040 "superblock": false, 00:06:48.040 "num_base_bdevs": 3, 00:06:48.040 "num_base_bdevs_discovered": 2, 00:06:48.040 "num_base_bdevs_operational": 3, 00:06:48.040 "base_bdevs_list": [ 00:06:48.040 { 00:06:48.040 "name": "BaseBdev1", 00:06:48.040 "uuid": "5763a12a-909a-418d-be4b-5d59d5d46f96", 00:06:48.040 "is_configured": true, 00:06:48.040 "data_offset": 0, 00:06:48.040 "data_size": 65536 00:06:48.040 }, 00:06:48.040 { 00:06:48.040 "name": null, 00:06:48.040 "uuid": "908e153f-527e-4526-a06f-06c1db943497", 00:06:48.040 "is_configured": false, 00:06:48.040 "data_offset": 0, 00:06:48.040 "data_size": 65536 00:06:48.040 }, 00:06:48.040 { 00:06:48.040 "name": "BaseBdev3", 00:06:48.040 "uuid": "1d939061-9cdf-4692-ab94-97b84e50a8cc", 
00:06:48.040 "is_configured": true, 00:06:48.040 "data_offset": 0, 00:06:48.040 "data_size": 65536 00:06:48.040 } 00:06:48.040 ] 00:06:48.040 }' 00:06:48.040 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:48.040 14:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.299 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:48.299 14:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.299 14:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.299 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:06:48.299 14:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.299 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:06:48.299 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:06:48.299 14:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.299 14:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.299 [2024-10-01 14:31:39.817398] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:06:48.299 14:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.299 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:06:48.299 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:48.299 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:48.299 14:31:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:48.299 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:48.299 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:48.299 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:48.299 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:48.299 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:48.299 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:48.300 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:48.300 14:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.300 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:48.300 14:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.300 14:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.300 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:48.300 "name": "Existed_Raid", 00:06:48.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:48.300 "strip_size_kb": 64, 00:06:48.300 "state": "configuring", 00:06:48.300 "raid_level": "raid0", 00:06:48.300 "superblock": false, 00:06:48.300 "num_base_bdevs": 3, 00:06:48.300 "num_base_bdevs_discovered": 1, 00:06:48.300 "num_base_bdevs_operational": 3, 00:06:48.300 "base_bdevs_list": [ 00:06:48.300 { 00:06:48.300 "name": "BaseBdev1", 00:06:48.300 "uuid": "5763a12a-909a-418d-be4b-5d59d5d46f96", 00:06:48.300 "is_configured": true, 00:06:48.300 "data_offset": 0, 
00:06:48.300 "data_size": 65536 00:06:48.300 }, 00:06:48.300 { 00:06:48.300 "name": null, 00:06:48.300 "uuid": "908e153f-527e-4526-a06f-06c1db943497", 00:06:48.300 "is_configured": false, 00:06:48.300 "data_offset": 0, 00:06:48.300 "data_size": 65536 00:06:48.300 }, 00:06:48.300 { 00:06:48.300 "name": null, 00:06:48.300 "uuid": "1d939061-9cdf-4692-ab94-97b84e50a8cc", 00:06:48.300 "is_configured": false, 00:06:48.300 "data_offset": 0, 00:06:48.300 "data_size": 65536 00:06:48.300 } 00:06:48.300 ] 00:06:48.300 }' 00:06:48.300 14:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:48.300 14:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.558 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:48.558 14:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.558 14:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.558 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:06:48.558 14:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.558 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:06:48.558 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:06:48.558 14:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.558 14:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.558 [2024-10-01 14:31:40.165525] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:06:48.558 14:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.558 
14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:06:48.558 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:48.558 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:48.558 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:48.558 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:48.558 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:48.558 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:48.558 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:48.558 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:48.558 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:48.558 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:48.558 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:48.558 14:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.558 14:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.558 14:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.558 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:48.558 "name": "Existed_Raid", 00:06:48.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:48.558 "strip_size_kb": 64, 00:06:48.558 "state": "configuring", 
00:06:48.558 "raid_level": "raid0", 00:06:48.558 "superblock": false, 00:06:48.558 "num_base_bdevs": 3, 00:06:48.558 "num_base_bdevs_discovered": 2, 00:06:48.558 "num_base_bdevs_operational": 3, 00:06:48.559 "base_bdevs_list": [ 00:06:48.559 { 00:06:48.559 "name": "BaseBdev1", 00:06:48.559 "uuid": "5763a12a-909a-418d-be4b-5d59d5d46f96", 00:06:48.559 "is_configured": true, 00:06:48.559 "data_offset": 0, 00:06:48.559 "data_size": 65536 00:06:48.559 }, 00:06:48.559 { 00:06:48.559 "name": null, 00:06:48.559 "uuid": "908e153f-527e-4526-a06f-06c1db943497", 00:06:48.559 "is_configured": false, 00:06:48.559 "data_offset": 0, 00:06:48.559 "data_size": 65536 00:06:48.559 }, 00:06:48.559 { 00:06:48.559 "name": "BaseBdev3", 00:06:48.559 "uuid": "1d939061-9cdf-4692-ab94-97b84e50a8cc", 00:06:48.559 "is_configured": true, 00:06:48.559 "data_offset": 0, 00:06:48.559 "data_size": 65536 00:06:48.559 } 00:06:48.559 ] 00:06:48.559 }' 00:06:48.559 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:48.559 14:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.818 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:48.818 14:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.818 14:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.818 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:06:49.079 14:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.079 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:06:49.079 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:49.079 14:31:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.079 14:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.079 [2024-10-01 14:31:40.521636] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:49.079 14:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.079 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:06:49.079 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:49.079 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:49.079 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:49.079 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:49.079 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:49.079 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:49.079 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:49.079 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:49.079 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:49.079 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:49.079 14:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.079 14:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.079 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:49.079 
14:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.079 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:49.079 "name": "Existed_Raid", 00:06:49.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:49.079 "strip_size_kb": 64, 00:06:49.079 "state": "configuring", 00:06:49.079 "raid_level": "raid0", 00:06:49.079 "superblock": false, 00:06:49.079 "num_base_bdevs": 3, 00:06:49.079 "num_base_bdevs_discovered": 1, 00:06:49.079 "num_base_bdevs_operational": 3, 00:06:49.079 "base_bdevs_list": [ 00:06:49.079 { 00:06:49.079 "name": null, 00:06:49.079 "uuid": "5763a12a-909a-418d-be4b-5d59d5d46f96", 00:06:49.079 "is_configured": false, 00:06:49.079 "data_offset": 0, 00:06:49.079 "data_size": 65536 00:06:49.079 }, 00:06:49.079 { 00:06:49.079 "name": null, 00:06:49.079 "uuid": "908e153f-527e-4526-a06f-06c1db943497", 00:06:49.079 "is_configured": false, 00:06:49.079 "data_offset": 0, 00:06:49.079 "data_size": 65536 00:06:49.079 }, 00:06:49.079 { 00:06:49.079 "name": "BaseBdev3", 00:06:49.079 "uuid": "1d939061-9cdf-4692-ab94-97b84e50a8cc", 00:06:49.079 "is_configured": true, 00:06:49.079 "data_offset": 0, 00:06:49.079 "data_size": 65536 00:06:49.079 } 00:06:49.079 ] 00:06:49.079 }' 00:06:49.079 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:49.079 14:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.339 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:49.339 14:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.339 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:06:49.339 14:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.339 14:31:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.339 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:06:49.339 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:06:49.339 14:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.339 14:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.339 [2024-10-01 14:31:40.942162] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:49.339 14:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.339 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:06:49.339 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:49.339 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:49.340 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:49.340 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:49.340 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:49.340 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:49.340 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:49.340 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:49.340 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:49.340 14:31:40 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:49.340 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:49.340 14:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.340 14:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.340 14:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.340 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:49.340 "name": "Existed_Raid", 00:06:49.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:49.340 "strip_size_kb": 64, 00:06:49.340 "state": "configuring", 00:06:49.340 "raid_level": "raid0", 00:06:49.340 "superblock": false, 00:06:49.340 "num_base_bdevs": 3, 00:06:49.340 "num_base_bdevs_discovered": 2, 00:06:49.340 "num_base_bdevs_operational": 3, 00:06:49.340 "base_bdevs_list": [ 00:06:49.340 { 00:06:49.340 "name": null, 00:06:49.340 "uuid": "5763a12a-909a-418d-be4b-5d59d5d46f96", 00:06:49.340 "is_configured": false, 00:06:49.340 "data_offset": 0, 00:06:49.340 "data_size": 65536 00:06:49.340 }, 00:06:49.340 { 00:06:49.340 "name": "BaseBdev2", 00:06:49.340 "uuid": "908e153f-527e-4526-a06f-06c1db943497", 00:06:49.340 "is_configured": true, 00:06:49.340 "data_offset": 0, 00:06:49.340 "data_size": 65536 00:06:49.340 }, 00:06:49.340 { 00:06:49.340 "name": "BaseBdev3", 00:06:49.340 "uuid": "1d939061-9cdf-4692-ab94-97b84e50a8cc", 00:06:49.340 "is_configured": true, 00:06:49.340 "data_offset": 0, 00:06:49.340 "data_size": 65536 00:06:49.340 } 00:06:49.340 ] 00:06:49.340 }' 00:06:49.340 14:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:49.340 14:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.599 14:31:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:06:49.599 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:49.599 14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.599 14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.859 14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.859 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:06:49.859 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:49.859 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:06:49.859 14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.859 14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.859 14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.859 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5763a12a-909a-418d-be4b-5d59d5d46f96 00:06:49.859 14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.859 14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.859 [2024-10-01 14:31:41.375435] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:06:49.859 [2024-10-01 14:31:41.375488] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:06:49.859 [2024-10-01 14:31:41.375497] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:49.859 [2024-10-01 14:31:41.375799] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:06:49.859 [2024-10-01 14:31:41.375952] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:06:49.859 [2024-10-01 14:31:41.375968] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:06:49.859 [2024-10-01 14:31:41.376239] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:49.859 NewBaseBdev 00:06:49.859 14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.859 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:06:49.859 14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:06:49.859 14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:49.859 14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:06:49.859 14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:49.859 14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:49.859 14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:06:49.859 14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.859 14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.859 14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.859 14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:06:49.859 14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.859 
14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.859 [ 00:06:49.859 { 00:06:49.859 "name": "NewBaseBdev", 00:06:49.859 "aliases": [ 00:06:49.859 "5763a12a-909a-418d-be4b-5d59d5d46f96" 00:06:49.859 ], 00:06:49.859 "product_name": "Malloc disk", 00:06:49.859 "block_size": 512, 00:06:49.859 "num_blocks": 65536, 00:06:49.859 "uuid": "5763a12a-909a-418d-be4b-5d59d5d46f96", 00:06:49.859 "assigned_rate_limits": { 00:06:49.859 "rw_ios_per_sec": 0, 00:06:49.859 "rw_mbytes_per_sec": 0, 00:06:49.859 "r_mbytes_per_sec": 0, 00:06:49.859 "w_mbytes_per_sec": 0 00:06:49.859 }, 00:06:49.859 "claimed": true, 00:06:49.859 "claim_type": "exclusive_write", 00:06:49.859 "zoned": false, 00:06:49.859 "supported_io_types": { 00:06:49.859 "read": true, 00:06:49.859 "write": true, 00:06:49.859 "unmap": true, 00:06:49.859 "flush": true, 00:06:49.859 "reset": true, 00:06:49.859 "nvme_admin": false, 00:06:49.859 "nvme_io": false, 00:06:49.859 "nvme_io_md": false, 00:06:49.859 "write_zeroes": true, 00:06:49.859 "zcopy": true, 00:06:49.859 "get_zone_info": false, 00:06:49.859 "zone_management": false, 00:06:49.859 "zone_append": false, 00:06:49.859 "compare": false, 00:06:49.859 "compare_and_write": false, 00:06:49.859 "abort": true, 00:06:49.859 "seek_hole": false, 00:06:49.859 "seek_data": false, 00:06:49.859 "copy": true, 00:06:49.859 "nvme_iov_md": false 00:06:49.859 }, 00:06:49.859 "memory_domains": [ 00:06:49.859 { 00:06:49.859 "dma_device_id": "system", 00:06:49.860 "dma_device_type": 1 00:06:49.860 }, 00:06:49.860 { 00:06:49.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:49.860 "dma_device_type": 2 00:06:49.860 } 00:06:49.860 ], 00:06:49.860 "driver_specific": {} 00:06:49.860 } 00:06:49.860 ] 00:06:49.860 14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.860 14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:06:49.860 14:31:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:06:49.860 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:49.860 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:49.860 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:49.860 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:49.860 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:49.860 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:49.860 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:49.860 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:49.860 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:49.860 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:49.860 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:49.860 14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.860 14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.860 14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.860 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:49.860 "name": "Existed_Raid", 00:06:49.860 "uuid": "b34b272c-7df8-4f75-b509-5ce38494a2f6", 00:06:49.860 "strip_size_kb": 64, 00:06:49.860 "state": "online", 00:06:49.860 "raid_level": 
"raid0", 00:06:49.860 "superblock": false, 00:06:49.860 "num_base_bdevs": 3, 00:06:49.860 "num_base_bdevs_discovered": 3, 00:06:49.860 "num_base_bdevs_operational": 3, 00:06:49.860 "base_bdevs_list": [ 00:06:49.860 { 00:06:49.860 "name": "NewBaseBdev", 00:06:49.860 "uuid": "5763a12a-909a-418d-be4b-5d59d5d46f96", 00:06:49.860 "is_configured": true, 00:06:49.860 "data_offset": 0, 00:06:49.860 "data_size": 65536 00:06:49.860 }, 00:06:49.860 { 00:06:49.860 "name": "BaseBdev2", 00:06:49.860 "uuid": "908e153f-527e-4526-a06f-06c1db943497", 00:06:49.860 "is_configured": true, 00:06:49.860 "data_offset": 0, 00:06:49.860 "data_size": 65536 00:06:49.860 }, 00:06:49.860 { 00:06:49.860 "name": "BaseBdev3", 00:06:49.860 "uuid": "1d939061-9cdf-4692-ab94-97b84e50a8cc", 00:06:49.860 "is_configured": true, 00:06:49.860 "data_offset": 0, 00:06:49.860 "data_size": 65536 00:06:49.860 } 00:06:49.860 ] 00:06:49.860 }' 00:06:49.860 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:49.860 14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.119 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:06:50.119 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:50.119 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:50.119 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:50.119 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:50.119 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:50.119 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:50.119 14:31:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:50.119 14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.119 14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.119 [2024-10-01 14:31:41.751941] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:50.119 14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.119 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:50.119 "name": "Existed_Raid", 00:06:50.119 "aliases": [ 00:06:50.119 "b34b272c-7df8-4f75-b509-5ce38494a2f6" 00:06:50.119 ], 00:06:50.119 "product_name": "Raid Volume", 00:06:50.119 "block_size": 512, 00:06:50.119 "num_blocks": 196608, 00:06:50.119 "uuid": "b34b272c-7df8-4f75-b509-5ce38494a2f6", 00:06:50.119 "assigned_rate_limits": { 00:06:50.119 "rw_ios_per_sec": 0, 00:06:50.119 "rw_mbytes_per_sec": 0, 00:06:50.119 "r_mbytes_per_sec": 0, 00:06:50.119 "w_mbytes_per_sec": 0 00:06:50.119 }, 00:06:50.119 "claimed": false, 00:06:50.119 "zoned": false, 00:06:50.119 "supported_io_types": { 00:06:50.119 "read": true, 00:06:50.119 "write": true, 00:06:50.119 "unmap": true, 00:06:50.119 "flush": true, 00:06:50.119 "reset": true, 00:06:50.119 "nvme_admin": false, 00:06:50.119 "nvme_io": false, 00:06:50.119 "nvme_io_md": false, 00:06:50.119 "write_zeroes": true, 00:06:50.119 "zcopy": false, 00:06:50.119 "get_zone_info": false, 00:06:50.119 "zone_management": false, 00:06:50.119 "zone_append": false, 00:06:50.119 "compare": false, 00:06:50.119 "compare_and_write": false, 00:06:50.119 "abort": false, 00:06:50.119 "seek_hole": false, 00:06:50.119 "seek_data": false, 00:06:50.119 "copy": false, 00:06:50.119 "nvme_iov_md": false 00:06:50.119 }, 00:06:50.119 "memory_domains": [ 00:06:50.119 { 00:06:50.119 "dma_device_id": "system", 00:06:50.119 "dma_device_type": 1 00:06:50.119 }, 00:06:50.119 { 
00:06:50.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:50.119 "dma_device_type": 2 00:06:50.119 }, 00:06:50.119 { 00:06:50.119 "dma_device_id": "system", 00:06:50.119 "dma_device_type": 1 00:06:50.119 }, 00:06:50.119 { 00:06:50.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:50.119 "dma_device_type": 2 00:06:50.119 }, 00:06:50.119 { 00:06:50.119 "dma_device_id": "system", 00:06:50.119 "dma_device_type": 1 00:06:50.119 }, 00:06:50.119 { 00:06:50.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:50.119 "dma_device_type": 2 00:06:50.119 } 00:06:50.119 ], 00:06:50.119 "driver_specific": { 00:06:50.119 "raid": { 00:06:50.119 "uuid": "b34b272c-7df8-4f75-b509-5ce38494a2f6", 00:06:50.119 "strip_size_kb": 64, 00:06:50.119 "state": "online", 00:06:50.119 "raid_level": "raid0", 00:06:50.119 "superblock": false, 00:06:50.119 "num_base_bdevs": 3, 00:06:50.119 "num_base_bdevs_discovered": 3, 00:06:50.119 "num_base_bdevs_operational": 3, 00:06:50.119 "base_bdevs_list": [ 00:06:50.119 { 00:06:50.119 "name": "NewBaseBdev", 00:06:50.119 "uuid": "5763a12a-909a-418d-be4b-5d59d5d46f96", 00:06:50.119 "is_configured": true, 00:06:50.119 "data_offset": 0, 00:06:50.119 "data_size": 65536 00:06:50.119 }, 00:06:50.119 { 00:06:50.119 "name": "BaseBdev2", 00:06:50.119 "uuid": "908e153f-527e-4526-a06f-06c1db943497", 00:06:50.119 "is_configured": true, 00:06:50.119 "data_offset": 0, 00:06:50.119 "data_size": 65536 00:06:50.119 }, 00:06:50.119 { 00:06:50.119 "name": "BaseBdev3", 00:06:50.119 "uuid": "1d939061-9cdf-4692-ab94-97b84e50a8cc", 00:06:50.119 "is_configured": true, 00:06:50.119 "data_offset": 0, 00:06:50.119 "data_size": 65536 00:06:50.119 } 00:06:50.119 ] 00:06:50.119 } 00:06:50.119 } 00:06:50.119 }' 00:06:50.119 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:50.379 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # 
base_bdev_names='NewBaseBdev 00:06:50.379 BaseBdev2 00:06:50.379 BaseBdev3' 00:06:50.379 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:50.379 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:50.379 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:50.379 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:50.379 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:06:50.379 14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.379 14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.379 14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.379 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:50.379 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:50.379 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:50.379 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:50.379 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:50.379 14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.379 14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.379 14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:06:50.379 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:50.379 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:50.379 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:50.379 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:50.379 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:06:50.379 14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.379 14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.379 14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.379 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:50.380 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:50.380 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:50.380 14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.380 14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.380 [2024-10-01 14:31:41.983631] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:50.380 [2024-10-01 14:31:41.983663] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:50.380 [2024-10-01 14:31:41.983756] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:50.380 [2024-10-01 14:31:41.983812] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:06:50.380 [2024-10-01 14:31:41.983836] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:06:50.380 14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.380 14:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62558 00:06:50.380 14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 62558 ']' 00:06:50.380 14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 62558 00:06:50.380 14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:06:50.380 14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:50.380 14:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62558 00:06:50.380 killing process with pid 62558 00:06:50.380 14:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:50.380 14:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:50.380 14:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62558' 00:06:50.380 14:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 62558 00:06:50.380 [2024-10-01 14:31:42.017228] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:50.380 14:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 62558 00:06:50.640 [2024-10-01 14:31:42.209815] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:51.583 ************************************ 00:06:51.583 END TEST raid_state_function_test 00:06:51.583 ************************************ 00:06:51.583 14:31:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:06:51.583 00:06:51.583 real 0m8.065s 00:06:51.583 user 0m12.813s 00:06:51.583 sys 0m1.218s 00:06:51.583 14:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.583 14:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.583 14:31:43 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:06:51.583 14:31:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:51.583 14:31:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.583 14:31:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:51.583 ************************************ 00:06:51.583 START TEST raid_state_function_test_sb 00:06:51.583 ************************************ 00:06:51.583 14:31:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:06:51.583 14:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:06:51.583 14:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:06:51.583 14:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:06:51.583 14:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:51.584 14:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:51.584 14:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:51.584 14:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:51.584 14:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:51.584 14:31:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:51.584 14:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:51.584 14:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:51.584 14:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:51.584 14:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:06:51.584 14:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:51.584 14:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:51.584 Process raid pid: 63157 00:06:51.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.584 14:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:06:51.584 14:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:51.584 14:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:51.584 14:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:51.584 14:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:51.584 14:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:51.584 14:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:06:51.584 14:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:51.584 14:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:51.584 14:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 
00:06:51.584 14:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:06:51.584 14:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63157 00:06:51.584 14:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63157' 00:06:51.584 14:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63157 00:06:51.584 14:31:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 63157 ']' 00:06:51.584 14:31:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.584 14:31:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:51.584 14:31:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.584 14:31:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:51.584 14:31:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:51.584 14:31:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:51.584 [2024-10-01 14:31:43.180508] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:06:51.584 [2024-10-01 14:31:43.180635] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:51.846 [2024-10-01 14:31:43.333335] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.846 [2024-10-01 14:31:43.525035] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.107 [2024-10-01 14:31:43.665099] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:52.107 [2024-10-01 14:31:43.665149] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:52.676 14:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:52.676 14:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:06:52.676 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:06:52.676 14:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.676 14:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:52.676 [2024-10-01 14:31:44.116855] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:52.676 [2024-10-01 14:31:44.116907] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:52.676 [2024-10-01 14:31:44.116918] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:52.676 [2024-10-01 14:31:44.116927] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:52.676 [2024-10-01 14:31:44.116934] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:06:52.676 [2024-10-01 14:31:44.116942] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:06:52.676 14:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.676 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:06:52.676 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:52.676 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:52.676 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:52.676 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:52.676 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:52.676 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:52.676 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:52.676 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:52.676 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:52.676 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:52.676 14:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.676 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:52.676 14:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:52.676 14:31:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.676 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:52.676 "name": "Existed_Raid", 00:06:52.676 "uuid": "05b405dd-8138-4b7d-b79c-8a555b3c6b4e", 00:06:52.676 "strip_size_kb": 64, 00:06:52.676 "state": "configuring", 00:06:52.676 "raid_level": "raid0", 00:06:52.676 "superblock": true, 00:06:52.676 "num_base_bdevs": 3, 00:06:52.676 "num_base_bdevs_discovered": 0, 00:06:52.676 "num_base_bdevs_operational": 3, 00:06:52.676 "base_bdevs_list": [ 00:06:52.676 { 00:06:52.676 "name": "BaseBdev1", 00:06:52.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:52.676 "is_configured": false, 00:06:52.676 "data_offset": 0, 00:06:52.676 "data_size": 0 00:06:52.676 }, 00:06:52.676 { 00:06:52.676 "name": "BaseBdev2", 00:06:52.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:52.676 "is_configured": false, 00:06:52.676 "data_offset": 0, 00:06:52.676 "data_size": 0 00:06:52.676 }, 00:06:52.676 { 00:06:52.676 "name": "BaseBdev3", 00:06:52.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:52.676 "is_configured": false, 00:06:52.676 "data_offset": 0, 00:06:52.676 "data_size": 0 00:06:52.676 } 00:06:52.676 ] 00:06:52.676 }' 00:06:52.676 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:52.676 14:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:52.936 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:52.936 14:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.936 14:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:52.936 [2024-10-01 14:31:44.468828] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:52.936 [2024-10-01 14:31:44.468861] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:06:52.936 14:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.936 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:06:52.936 14:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.936 14:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:52.936 [2024-10-01 14:31:44.476867] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:52.936 [2024-10-01 14:31:44.476909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:52.936 [2024-10-01 14:31:44.476917] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:52.936 [2024-10-01 14:31:44.476926] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:52.936 [2024-10-01 14:31:44.476932] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:06:52.936 [2024-10-01 14:31:44.476940] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:06:52.936 14:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.937 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:52.937 14:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.937 14:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:52.937 [2024-10-01 14:31:44.523995] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:52.937 BaseBdev1 
00:06:52.937 14:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.937 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:52.937 14:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:06:52.937 14:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:52.937 14:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:06:52.937 14:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:52.937 14:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:52.937 14:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:06:52.937 14:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.937 14:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:52.937 14:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.937 14:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:52.937 14:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.937 14:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:52.937 [ 00:06:52.937 { 00:06:52.937 "name": "BaseBdev1", 00:06:52.937 "aliases": [ 00:06:52.937 "1f4f99a3-bd9d-4ffb-80f4-1fef1b3871a1" 00:06:52.937 ], 00:06:52.937 "product_name": "Malloc disk", 00:06:52.937 "block_size": 512, 00:06:52.937 "num_blocks": 65536, 00:06:52.937 "uuid": "1f4f99a3-bd9d-4ffb-80f4-1fef1b3871a1", 00:06:52.937 "assigned_rate_limits": { 00:06:52.937 
"rw_ios_per_sec": 0, 00:06:52.937 "rw_mbytes_per_sec": 0, 00:06:52.937 "r_mbytes_per_sec": 0, 00:06:52.937 "w_mbytes_per_sec": 0 00:06:52.937 }, 00:06:52.937 "claimed": true, 00:06:52.937 "claim_type": "exclusive_write", 00:06:52.937 "zoned": false, 00:06:52.937 "supported_io_types": { 00:06:52.937 "read": true, 00:06:52.937 "write": true, 00:06:52.937 "unmap": true, 00:06:52.937 "flush": true, 00:06:52.937 "reset": true, 00:06:52.937 "nvme_admin": false, 00:06:52.937 "nvme_io": false, 00:06:52.937 "nvme_io_md": false, 00:06:52.937 "write_zeroes": true, 00:06:52.937 "zcopy": true, 00:06:52.937 "get_zone_info": false, 00:06:52.937 "zone_management": false, 00:06:52.937 "zone_append": false, 00:06:52.937 "compare": false, 00:06:52.937 "compare_and_write": false, 00:06:52.937 "abort": true, 00:06:52.937 "seek_hole": false, 00:06:52.937 "seek_data": false, 00:06:52.937 "copy": true, 00:06:52.937 "nvme_iov_md": false 00:06:52.937 }, 00:06:52.937 "memory_domains": [ 00:06:52.937 { 00:06:52.937 "dma_device_id": "system", 00:06:52.937 "dma_device_type": 1 00:06:52.937 }, 00:06:52.937 { 00:06:52.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:52.937 "dma_device_type": 2 00:06:52.937 } 00:06:52.937 ], 00:06:52.937 "driver_specific": {} 00:06:52.937 } 00:06:52.937 ] 00:06:52.937 14:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.937 14:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:06:52.937 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:06:52.937 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:52.937 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:52.937 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:06:52.937 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:52.937 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:52.937 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:52.937 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:52.937 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:52.937 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:52.937 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:52.937 14:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.937 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:52.937 14:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:52.937 14:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.937 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:52.937 "name": "Existed_Raid", 00:06:52.937 "uuid": "2cf6267c-bf87-4594-941a-3e1154e9eb28", 00:06:52.937 "strip_size_kb": 64, 00:06:52.937 "state": "configuring", 00:06:52.937 "raid_level": "raid0", 00:06:52.937 "superblock": true, 00:06:52.937 "num_base_bdevs": 3, 00:06:52.937 "num_base_bdevs_discovered": 1, 00:06:52.937 "num_base_bdevs_operational": 3, 00:06:52.937 "base_bdevs_list": [ 00:06:52.937 { 00:06:52.937 "name": "BaseBdev1", 00:06:52.937 "uuid": "1f4f99a3-bd9d-4ffb-80f4-1fef1b3871a1", 00:06:52.937 "is_configured": true, 00:06:52.937 "data_offset": 2048, 00:06:52.937 "data_size": 63488 
00:06:52.937 }, 00:06:52.937 { 00:06:52.937 "name": "BaseBdev2", 00:06:52.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:52.937 "is_configured": false, 00:06:52.937 "data_offset": 0, 00:06:52.937 "data_size": 0 00:06:52.937 }, 00:06:52.937 { 00:06:52.937 "name": "BaseBdev3", 00:06:52.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:52.937 "is_configured": false, 00:06:52.937 "data_offset": 0, 00:06:52.937 "data_size": 0 00:06:52.937 } 00:06:52.937 ] 00:06:52.937 }' 00:06:52.937 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:52.937 14:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:53.198 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:53.198 14:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.198 14:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:53.198 [2024-10-01 14:31:44.880124] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:53.198 [2024-10-01 14:31:44.880315] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:06:53.461 14:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.461 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:06:53.461 14:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.461 14:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:53.461 [2024-10-01 14:31:44.892195] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:53.461 [2024-10-01 
14:31:44.894298] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:53.461 [2024-10-01 14:31:44.894464] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:53.461 [2024-10-01 14:31:44.894481] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:06:53.461 [2024-10-01 14:31:44.894492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:06:53.461 14:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.461 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:53.461 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:53.461 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:06:53.461 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:53.461 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:53.461 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:53.461 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:53.461 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:53.461 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:53.461 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:53.461 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:53.461 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:06:53.461 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:53.461 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:53.461 14:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.461 14:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:53.461 14:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.461 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:53.461 "name": "Existed_Raid", 00:06:53.461 "uuid": "f7e7105d-434d-42eb-9926-578fc41ef1f1", 00:06:53.461 "strip_size_kb": 64, 00:06:53.461 "state": "configuring", 00:06:53.461 "raid_level": "raid0", 00:06:53.461 "superblock": true, 00:06:53.461 "num_base_bdevs": 3, 00:06:53.461 "num_base_bdevs_discovered": 1, 00:06:53.461 "num_base_bdevs_operational": 3, 00:06:53.461 "base_bdevs_list": [ 00:06:53.461 { 00:06:53.461 "name": "BaseBdev1", 00:06:53.461 "uuid": "1f4f99a3-bd9d-4ffb-80f4-1fef1b3871a1", 00:06:53.461 "is_configured": true, 00:06:53.461 "data_offset": 2048, 00:06:53.461 "data_size": 63488 00:06:53.461 }, 00:06:53.461 { 00:06:53.461 "name": "BaseBdev2", 00:06:53.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:53.461 "is_configured": false, 00:06:53.461 "data_offset": 0, 00:06:53.461 "data_size": 0 00:06:53.461 }, 00:06:53.461 { 00:06:53.461 "name": "BaseBdev3", 00:06:53.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:53.461 "is_configured": false, 00:06:53.461 "data_offset": 0, 00:06:53.461 "data_size": 0 00:06:53.461 } 00:06:53.461 ] 00:06:53.461 }' 00:06:53.461 14:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:53.461 14:31:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:06:53.722 14:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:53.722 14:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.722 14:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:53.722 BaseBdev2 00:06:53.722 [2024-10-01 14:31:45.255486] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:53.722 14:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.722 14:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:53.722 14:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:06:53.722 14:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:53.722 14:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:06:53.722 14:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:53.722 14:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:53.722 14:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:06:53.722 14:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.722 14:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:53.722 14:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.722 14:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:53.722 14:31:45 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.722 14:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:53.722 [ 00:06:53.722 { 00:06:53.722 "name": "BaseBdev2", 00:06:53.722 "aliases": [ 00:06:53.722 "65e37052-c87b-4331-bc35-724ba72f9714" 00:06:53.722 ], 00:06:53.722 "product_name": "Malloc disk", 00:06:53.722 "block_size": 512, 00:06:53.722 "num_blocks": 65536, 00:06:53.722 "uuid": "65e37052-c87b-4331-bc35-724ba72f9714", 00:06:53.722 "assigned_rate_limits": { 00:06:53.722 "rw_ios_per_sec": 0, 00:06:53.722 "rw_mbytes_per_sec": 0, 00:06:53.722 "r_mbytes_per_sec": 0, 00:06:53.722 "w_mbytes_per_sec": 0 00:06:53.722 }, 00:06:53.722 "claimed": true, 00:06:53.722 "claim_type": "exclusive_write", 00:06:53.722 "zoned": false, 00:06:53.722 "supported_io_types": { 00:06:53.722 "read": true, 00:06:53.722 "write": true, 00:06:53.722 "unmap": true, 00:06:53.722 "flush": true, 00:06:53.722 "reset": true, 00:06:53.722 "nvme_admin": false, 00:06:53.722 "nvme_io": false, 00:06:53.722 "nvme_io_md": false, 00:06:53.722 "write_zeroes": true, 00:06:53.722 "zcopy": true, 00:06:53.722 "get_zone_info": false, 00:06:53.722 "zone_management": false, 00:06:53.722 "zone_append": false, 00:06:53.722 "compare": false, 00:06:53.722 "compare_and_write": false, 00:06:53.722 "abort": true, 00:06:53.722 "seek_hole": false, 00:06:53.722 "seek_data": false, 00:06:53.722 "copy": true, 00:06:53.722 "nvme_iov_md": false 00:06:53.722 }, 00:06:53.722 "memory_domains": [ 00:06:53.722 { 00:06:53.722 "dma_device_id": "system", 00:06:53.722 "dma_device_type": 1 00:06:53.722 }, 00:06:53.722 { 00:06:53.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:53.722 "dma_device_type": 2 00:06:53.722 } 00:06:53.722 ], 00:06:53.722 "driver_specific": {} 00:06:53.722 } 00:06:53.722 ] 00:06:53.722 14:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.722 14:31:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:06:53.722 14:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:53.722 14:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:53.722 14:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:06:53.722 14:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:53.722 14:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:53.722 14:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:53.722 14:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:53.722 14:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:53.723 14:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:53.723 14:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:53.723 14:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:53.723 14:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:53.723 14:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:53.723 14:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:53.723 14:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.723 14:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:53.723 14:31:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.723 14:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:53.723 "name": "Existed_Raid", 00:06:53.723 "uuid": "f7e7105d-434d-42eb-9926-578fc41ef1f1", 00:06:53.723 "strip_size_kb": 64, 00:06:53.723 "state": "configuring", 00:06:53.723 "raid_level": "raid0", 00:06:53.723 "superblock": true, 00:06:53.723 "num_base_bdevs": 3, 00:06:53.723 "num_base_bdevs_discovered": 2, 00:06:53.723 "num_base_bdevs_operational": 3, 00:06:53.723 "base_bdevs_list": [ 00:06:53.723 { 00:06:53.723 "name": "BaseBdev1", 00:06:53.723 "uuid": "1f4f99a3-bd9d-4ffb-80f4-1fef1b3871a1", 00:06:53.723 "is_configured": true, 00:06:53.723 "data_offset": 2048, 00:06:53.723 "data_size": 63488 00:06:53.723 }, 00:06:53.723 { 00:06:53.723 "name": "BaseBdev2", 00:06:53.723 "uuid": "65e37052-c87b-4331-bc35-724ba72f9714", 00:06:53.723 "is_configured": true, 00:06:53.723 "data_offset": 2048, 00:06:53.723 "data_size": 63488 00:06:53.723 }, 00:06:53.723 { 00:06:53.723 "name": "BaseBdev3", 00:06:53.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:53.723 "is_configured": false, 00:06:53.723 "data_offset": 0, 00:06:53.723 "data_size": 0 00:06:53.723 } 00:06:53.723 ] 00:06:53.723 }' 00:06:53.723 14:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:53.723 14:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:54.027 14:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:06:54.027 14:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.027 14:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:54.027 [2024-10-01 14:31:45.650688] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:06:54.027 [2024-10-01 14:31:45.650938] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:54.027 [2024-10-01 14:31:45.650956] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:06:54.027 [2024-10-01 14:31:45.651207] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:06:54.027 [2024-10-01 14:31:45.651341] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:54.027 [2024-10-01 14:31:45.651350] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:06:54.027 [2024-10-01 14:31:45.651479] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:54.027 BaseBdev3 00:06:54.027 14:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.027 14:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:06:54.027 14:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:06:54.027 14:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:54.027 14:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:06:54.027 14:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:54.027 14:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:54.027 14:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:06:54.027 14:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.027 14:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:54.027 14:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:06:54.027 14:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:06:54.027 14:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.027 14:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:54.027 [ 00:06:54.027 { 00:06:54.027 "name": "BaseBdev3", 00:06:54.027 "aliases": [ 00:06:54.027 "261c1b48-4dce-4e39-bb0f-33bf65e9662b" 00:06:54.027 ], 00:06:54.027 "product_name": "Malloc disk", 00:06:54.027 "block_size": 512, 00:06:54.027 "num_blocks": 65536, 00:06:54.027 "uuid": "261c1b48-4dce-4e39-bb0f-33bf65e9662b", 00:06:54.343 "assigned_rate_limits": { 00:06:54.343 "rw_ios_per_sec": 0, 00:06:54.343 "rw_mbytes_per_sec": 0, 00:06:54.343 "r_mbytes_per_sec": 0, 00:06:54.343 "w_mbytes_per_sec": 0 00:06:54.343 }, 00:06:54.343 "claimed": true, 00:06:54.343 "claim_type": "exclusive_write", 00:06:54.343 "zoned": false, 00:06:54.343 "supported_io_types": { 00:06:54.343 "read": true, 00:06:54.343 "write": true, 00:06:54.343 "unmap": true, 00:06:54.343 "flush": true, 00:06:54.343 "reset": true, 00:06:54.343 "nvme_admin": false, 00:06:54.343 "nvme_io": false, 00:06:54.343 "nvme_io_md": false, 00:06:54.343 "write_zeroes": true, 00:06:54.343 "zcopy": true, 00:06:54.343 "get_zone_info": false, 00:06:54.343 "zone_management": false, 00:06:54.343 "zone_append": false, 00:06:54.343 "compare": false, 00:06:54.343 "compare_and_write": false, 00:06:54.343 "abort": true, 00:06:54.343 "seek_hole": false, 00:06:54.343 "seek_data": false, 00:06:54.343 "copy": true, 00:06:54.343 "nvme_iov_md": false 00:06:54.343 }, 00:06:54.343 "memory_domains": [ 00:06:54.343 { 00:06:54.343 "dma_device_id": "system", 00:06:54.343 "dma_device_type": 1 00:06:54.343 }, 00:06:54.343 { 00:06:54.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:54.343 "dma_device_type": 2 00:06:54.343 } 00:06:54.343 ], 00:06:54.343 "driver_specific": 
{} 00:06:54.343 } 00:06:54.343 ] 00:06:54.343 14:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.343 14:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:06:54.343 14:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:54.343 14:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:54.343 14:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:06:54.343 14:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:54.343 14:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:54.343 14:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:54.343 14:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:54.343 14:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:54.343 14:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:54.343 14:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:54.343 14:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:54.343 14:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:54.343 14:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:54.343 14:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.343 14:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:06:54.343 14:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:54.343 14:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.343 14:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:54.343 "name": "Existed_Raid", 00:06:54.343 "uuid": "f7e7105d-434d-42eb-9926-578fc41ef1f1", 00:06:54.343 "strip_size_kb": 64, 00:06:54.343 "state": "online", 00:06:54.343 "raid_level": "raid0", 00:06:54.343 "superblock": true, 00:06:54.343 "num_base_bdevs": 3, 00:06:54.343 "num_base_bdevs_discovered": 3, 00:06:54.343 "num_base_bdevs_operational": 3, 00:06:54.343 "base_bdevs_list": [ 00:06:54.343 { 00:06:54.343 "name": "BaseBdev1", 00:06:54.343 "uuid": "1f4f99a3-bd9d-4ffb-80f4-1fef1b3871a1", 00:06:54.343 "is_configured": true, 00:06:54.343 "data_offset": 2048, 00:06:54.343 "data_size": 63488 00:06:54.343 }, 00:06:54.343 { 00:06:54.343 "name": "BaseBdev2", 00:06:54.343 "uuid": "65e37052-c87b-4331-bc35-724ba72f9714", 00:06:54.344 "is_configured": true, 00:06:54.344 "data_offset": 2048, 00:06:54.344 "data_size": 63488 00:06:54.344 }, 00:06:54.344 { 00:06:54.344 "name": "BaseBdev3", 00:06:54.344 "uuid": "261c1b48-4dce-4e39-bb0f-33bf65e9662b", 00:06:54.344 "is_configured": true, 00:06:54.344 "data_offset": 2048, 00:06:54.344 "data_size": 63488 00:06:54.344 } 00:06:54.344 ] 00:06:54.344 }' 00:06:54.344 14:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:54.344 14:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:54.344 14:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:54.344 14:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:54.344 14:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:06:54.344 14:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:54.344 14:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:06:54.344 14:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:54.344 14:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:54.344 14:31:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:54.344 14:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.344 14:31:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:54.344 [2024-10-01 14:31:46.003173] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:54.344 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.606 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:54.606 "name": "Existed_Raid", 00:06:54.606 "aliases": [ 00:06:54.606 "f7e7105d-434d-42eb-9926-578fc41ef1f1" 00:06:54.606 ], 00:06:54.606 "product_name": "Raid Volume", 00:06:54.606 "block_size": 512, 00:06:54.606 "num_blocks": 190464, 00:06:54.606 "uuid": "f7e7105d-434d-42eb-9926-578fc41ef1f1", 00:06:54.606 "assigned_rate_limits": { 00:06:54.606 "rw_ios_per_sec": 0, 00:06:54.606 "rw_mbytes_per_sec": 0, 00:06:54.606 "r_mbytes_per_sec": 0, 00:06:54.606 "w_mbytes_per_sec": 0 00:06:54.606 }, 00:06:54.606 "claimed": false, 00:06:54.606 "zoned": false, 00:06:54.606 "supported_io_types": { 00:06:54.606 "read": true, 00:06:54.606 "write": true, 00:06:54.606 "unmap": true, 00:06:54.606 "flush": true, 00:06:54.606 "reset": true, 00:06:54.606 "nvme_admin": false, 00:06:54.606 "nvme_io": false, 00:06:54.606 "nvme_io_md": false, 00:06:54.606 
"write_zeroes": true, 00:06:54.606 "zcopy": false, 00:06:54.606 "get_zone_info": false, 00:06:54.606 "zone_management": false, 00:06:54.606 "zone_append": false, 00:06:54.606 "compare": false, 00:06:54.606 "compare_and_write": false, 00:06:54.606 "abort": false, 00:06:54.606 "seek_hole": false, 00:06:54.606 "seek_data": false, 00:06:54.606 "copy": false, 00:06:54.606 "nvme_iov_md": false 00:06:54.606 }, 00:06:54.606 "memory_domains": [ 00:06:54.606 { 00:06:54.606 "dma_device_id": "system", 00:06:54.606 "dma_device_type": 1 00:06:54.606 }, 00:06:54.606 { 00:06:54.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:54.606 "dma_device_type": 2 00:06:54.606 }, 00:06:54.606 { 00:06:54.606 "dma_device_id": "system", 00:06:54.606 "dma_device_type": 1 00:06:54.606 }, 00:06:54.606 { 00:06:54.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:54.606 "dma_device_type": 2 00:06:54.606 }, 00:06:54.606 { 00:06:54.606 "dma_device_id": "system", 00:06:54.606 "dma_device_type": 1 00:06:54.606 }, 00:06:54.606 { 00:06:54.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:54.606 "dma_device_type": 2 00:06:54.606 } 00:06:54.606 ], 00:06:54.606 "driver_specific": { 00:06:54.606 "raid": { 00:06:54.606 "uuid": "f7e7105d-434d-42eb-9926-578fc41ef1f1", 00:06:54.606 "strip_size_kb": 64, 00:06:54.606 "state": "online", 00:06:54.606 "raid_level": "raid0", 00:06:54.606 "superblock": true, 00:06:54.606 "num_base_bdevs": 3, 00:06:54.606 "num_base_bdevs_discovered": 3, 00:06:54.606 "num_base_bdevs_operational": 3, 00:06:54.606 "base_bdevs_list": [ 00:06:54.606 { 00:06:54.606 "name": "BaseBdev1", 00:06:54.606 "uuid": "1f4f99a3-bd9d-4ffb-80f4-1fef1b3871a1", 00:06:54.606 "is_configured": true, 00:06:54.606 "data_offset": 2048, 00:06:54.606 "data_size": 63488 00:06:54.606 }, 00:06:54.606 { 00:06:54.606 "name": "BaseBdev2", 00:06:54.606 "uuid": "65e37052-c87b-4331-bc35-724ba72f9714", 00:06:54.606 "is_configured": true, 00:06:54.606 "data_offset": 2048, 00:06:54.606 "data_size": 63488 00:06:54.606 }, 
00:06:54.606 { 00:06:54.606 "name": "BaseBdev3", 00:06:54.606 "uuid": "261c1b48-4dce-4e39-bb0f-33bf65e9662b", 00:06:54.606 "is_configured": true, 00:06:54.606 "data_offset": 2048, 00:06:54.606 "data_size": 63488 00:06:54.606 } 00:06:54.606 ] 00:06:54.606 } 00:06:54.606 } 00:06:54.606 }' 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:54.607 BaseBdev2 00:06:54.607 BaseBdev3' 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:54.607 
14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:54.607 [2024-10-01 14:31:46.194908] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:54.607 [2024-10-01 14:31:46.194940] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:54.607 [2024-10-01 14:31:46.194994] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:54.607 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.869 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:54.869 "name": "Existed_Raid", 00:06:54.869 "uuid": "f7e7105d-434d-42eb-9926-578fc41ef1f1", 00:06:54.869 "strip_size_kb": 64, 00:06:54.869 "state": "offline", 00:06:54.869 "raid_level": "raid0", 00:06:54.869 "superblock": true, 00:06:54.869 "num_base_bdevs": 3, 00:06:54.869 "num_base_bdevs_discovered": 2, 00:06:54.869 "num_base_bdevs_operational": 2, 00:06:54.869 "base_bdevs_list": [ 00:06:54.869 { 00:06:54.869 "name": null, 00:06:54.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:54.869 "is_configured": false, 00:06:54.869 "data_offset": 0, 00:06:54.869 "data_size": 63488 00:06:54.869 }, 00:06:54.869 { 00:06:54.869 "name": "BaseBdev2", 00:06:54.869 "uuid": "65e37052-c87b-4331-bc35-724ba72f9714", 00:06:54.869 "is_configured": true, 00:06:54.869 "data_offset": 2048, 00:06:54.869 "data_size": 63488 00:06:54.869 }, 00:06:54.869 { 00:06:54.869 "name": "BaseBdev3", 00:06:54.869 "uuid": "261c1b48-4dce-4e39-bb0f-33bf65e9662b", 
00:06:54.869 "is_configured": true, 00:06:54.869 "data_offset": 2048, 00:06:54.869 "data_size": 63488 00:06:54.869 } 00:06:54.869 ] 00:06:54.869 }' 00:06:54.869 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:54.869 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.128 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:55.128 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:55.128 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:55.128 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:55.128 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.128 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.128 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.128 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:55.128 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:55.128 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:55.128 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.128 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.128 [2024-10-01 14:31:46.631253] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:55.128 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.128 14:31:46 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:55.128 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:55.128 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:55.128 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:55.128 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.128 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.128 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.128 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:55.128 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:55.128 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:06:55.128 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.128 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.128 [2024-10-01 14:31:46.729738] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:06:55.128 [2024-10-01 14:31:46.729786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:06:55.128 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.128 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:55.128 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:55.128 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:06:55.128 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:55.128 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.128 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.128 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.390 BaseBdev2 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:06:55.390 14:31:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.390 [ 00:06:55.390 { 00:06:55.390 "name": "BaseBdev2", 00:06:55.390 "aliases": [ 00:06:55.390 "47608525-7018-488a-9a0c-7845e9e1a3ef" 00:06:55.390 ], 00:06:55.390 "product_name": "Malloc disk", 00:06:55.390 "block_size": 512, 00:06:55.390 "num_blocks": 65536, 00:06:55.390 "uuid": "47608525-7018-488a-9a0c-7845e9e1a3ef", 00:06:55.390 "assigned_rate_limits": { 00:06:55.390 "rw_ios_per_sec": 0, 00:06:55.390 "rw_mbytes_per_sec": 0, 00:06:55.390 "r_mbytes_per_sec": 0, 00:06:55.390 "w_mbytes_per_sec": 0 00:06:55.390 }, 00:06:55.390 "claimed": false, 00:06:55.390 "zoned": false, 00:06:55.390 "supported_io_types": { 00:06:55.390 "read": true, 00:06:55.390 "write": true, 00:06:55.390 "unmap": true, 00:06:55.390 "flush": true, 00:06:55.390 "reset": true, 00:06:55.390 "nvme_admin": false, 00:06:55.390 "nvme_io": false, 00:06:55.390 "nvme_io_md": false, 00:06:55.390 "write_zeroes": true, 00:06:55.390 "zcopy": true, 00:06:55.390 "get_zone_info": false, 00:06:55.390 
"zone_management": false, 00:06:55.390 "zone_append": false, 00:06:55.390 "compare": false, 00:06:55.390 "compare_and_write": false, 00:06:55.390 "abort": true, 00:06:55.390 "seek_hole": false, 00:06:55.390 "seek_data": false, 00:06:55.390 "copy": true, 00:06:55.390 "nvme_iov_md": false 00:06:55.390 }, 00:06:55.390 "memory_domains": [ 00:06:55.390 { 00:06:55.390 "dma_device_id": "system", 00:06:55.390 "dma_device_type": 1 00:06:55.390 }, 00:06:55.390 { 00:06:55.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:55.390 "dma_device_type": 2 00:06:55.390 } 00:06:55.390 ], 00:06:55.390 "driver_specific": {} 00:06:55.390 } 00:06:55.390 ] 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.390 BaseBdev3 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.390 [ 00:06:55.390 { 00:06:55.390 "name": "BaseBdev3", 00:06:55.390 "aliases": [ 00:06:55.390 "9c1429f2-2d2d-4e4e-ab79-66205584c266" 00:06:55.390 ], 00:06:55.390 "product_name": "Malloc disk", 00:06:55.390 "block_size": 512, 00:06:55.390 "num_blocks": 65536, 00:06:55.390 "uuid": "9c1429f2-2d2d-4e4e-ab79-66205584c266", 00:06:55.390 "assigned_rate_limits": { 00:06:55.390 "rw_ios_per_sec": 0, 00:06:55.390 "rw_mbytes_per_sec": 0, 00:06:55.390 "r_mbytes_per_sec": 0, 00:06:55.390 "w_mbytes_per_sec": 0 00:06:55.390 }, 00:06:55.390 "claimed": false, 00:06:55.390 "zoned": false, 00:06:55.390 "supported_io_types": { 00:06:55.390 "read": true, 00:06:55.390 "write": true, 00:06:55.390 "unmap": true, 00:06:55.390 "flush": true, 00:06:55.390 "reset": true, 00:06:55.390 "nvme_admin": false, 00:06:55.390 "nvme_io": false, 00:06:55.390 "nvme_io_md": false, 00:06:55.390 "write_zeroes": true, 00:06:55.390 
"zcopy": true, 00:06:55.390 "get_zone_info": false, 00:06:55.390 "zone_management": false, 00:06:55.390 "zone_append": false, 00:06:55.390 "compare": false, 00:06:55.390 "compare_and_write": false, 00:06:55.390 "abort": true, 00:06:55.390 "seek_hole": false, 00:06:55.390 "seek_data": false, 00:06:55.390 "copy": true, 00:06:55.390 "nvme_iov_md": false 00:06:55.390 }, 00:06:55.390 "memory_domains": [ 00:06:55.390 { 00:06:55.390 "dma_device_id": "system", 00:06:55.390 "dma_device_type": 1 00:06:55.390 }, 00:06:55.390 { 00:06:55.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:55.390 "dma_device_type": 2 00:06:55.390 } 00:06:55.390 ], 00:06:55.390 "driver_specific": {} 00:06:55.390 } 00:06:55.390 ] 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.390 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.390 [2024-10-01 14:31:46.955158] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:55.390 [2024-10-01 14:31:46.955339] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:55.390 [2024-10-01 14:31:46.955413] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:55.390 [2024-10-01 14:31:46.957305] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:06:55.391 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.391 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:06:55.391 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:55.391 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:55.391 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:55.391 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:55.391 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:55.391 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:55.391 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:55.391 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:55.391 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:55.391 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:55.391 14:31:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:55.391 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.391 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.391 14:31:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.391 14:31:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:55.391 "name": "Existed_Raid", 00:06:55.391 "uuid": "db629cbe-8efc-4e95-915d-f393ca7e2ab2", 00:06:55.391 "strip_size_kb": 64, 00:06:55.391 "state": "configuring", 00:06:55.391 "raid_level": "raid0", 00:06:55.391 "superblock": true, 00:06:55.391 "num_base_bdevs": 3, 00:06:55.391 "num_base_bdevs_discovered": 2, 00:06:55.391 "num_base_bdevs_operational": 3, 00:06:55.391 "base_bdevs_list": [ 00:06:55.391 { 00:06:55.391 "name": "BaseBdev1", 00:06:55.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:55.391 "is_configured": false, 00:06:55.391 "data_offset": 0, 00:06:55.391 "data_size": 0 00:06:55.391 }, 00:06:55.391 { 00:06:55.391 "name": "BaseBdev2", 00:06:55.391 "uuid": "47608525-7018-488a-9a0c-7845e9e1a3ef", 00:06:55.391 "is_configured": true, 00:06:55.391 "data_offset": 2048, 00:06:55.391 "data_size": 63488 00:06:55.391 }, 00:06:55.391 { 00:06:55.391 "name": "BaseBdev3", 00:06:55.391 "uuid": "9c1429f2-2d2d-4e4e-ab79-66205584c266", 00:06:55.391 "is_configured": true, 00:06:55.391 "data_offset": 2048, 00:06:55.391 "data_size": 63488 00:06:55.391 } 00:06:55.391 ] 00:06:55.391 }' 00:06:55.391 14:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:55.391 14:31:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.651 14:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:06:55.651 14:31:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.651 14:31:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.911 [2024-10-01 14:31:47.335205] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:55.911 14:31:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.911 14:31:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:06:55.911 14:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:55.911 14:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:55.911 14:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:55.911 14:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:55.911 14:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:55.911 14:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:55.911 14:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:55.911 14:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:55.911 14:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:55.911 14:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:55.911 14:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:55.911 14:31:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.911 14:31:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.911 14:31:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.911 14:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:55.911 "name": "Existed_Raid", 00:06:55.911 "uuid": "db629cbe-8efc-4e95-915d-f393ca7e2ab2", 00:06:55.911 "strip_size_kb": 64, 
00:06:55.911 "state": "configuring", 00:06:55.911 "raid_level": "raid0", 00:06:55.911 "superblock": true, 00:06:55.911 "num_base_bdevs": 3, 00:06:55.911 "num_base_bdevs_discovered": 1, 00:06:55.911 "num_base_bdevs_operational": 3, 00:06:55.911 "base_bdevs_list": [ 00:06:55.911 { 00:06:55.911 "name": "BaseBdev1", 00:06:55.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:55.911 "is_configured": false, 00:06:55.911 "data_offset": 0, 00:06:55.911 "data_size": 0 00:06:55.911 }, 00:06:55.911 { 00:06:55.911 "name": null, 00:06:55.911 "uuid": "47608525-7018-488a-9a0c-7845e9e1a3ef", 00:06:55.911 "is_configured": false, 00:06:55.911 "data_offset": 0, 00:06:55.911 "data_size": 63488 00:06:55.911 }, 00:06:55.911 { 00:06:55.911 "name": "BaseBdev3", 00:06:55.911 "uuid": "9c1429f2-2d2d-4e4e-ab79-66205584c266", 00:06:55.911 "is_configured": true, 00:06:55.911 "data_offset": 2048, 00:06:55.911 "data_size": 63488 00:06:55.911 } 00:06:55.911 ] 00:06:55.911 }' 00:06:55.911 14:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:55.911 14:31:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.172 14:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:56.172 14:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:06:56.172 14:31:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.172 14:31:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.172 14:31:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.172 14:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:06:56.172 14:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:06:56.172 14:31:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.172 14:31:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.172 [2024-10-01 14:31:47.742199] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:56.172 BaseBdev1 00:06:56.172 14:31:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.172 14:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:06:56.172 14:31:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:06:56.172 14:31:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:56.172 14:31:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:06:56.172 14:31:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:56.172 14:31:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:56.172 14:31:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:06:56.172 14:31:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.172 14:31:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.172 14:31:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.172 14:31:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:56.172 14:31:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.172 14:31:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.172 
[ 00:06:56.172 { 00:06:56.172 "name": "BaseBdev1", 00:06:56.172 "aliases": [ 00:06:56.172 "7e3e35ae-3805-4b34-83b2-4f08bae2d4fc" 00:06:56.172 ], 00:06:56.172 "product_name": "Malloc disk", 00:06:56.172 "block_size": 512, 00:06:56.172 "num_blocks": 65536, 00:06:56.172 "uuid": "7e3e35ae-3805-4b34-83b2-4f08bae2d4fc", 00:06:56.172 "assigned_rate_limits": { 00:06:56.172 "rw_ios_per_sec": 0, 00:06:56.172 "rw_mbytes_per_sec": 0, 00:06:56.172 "r_mbytes_per_sec": 0, 00:06:56.172 "w_mbytes_per_sec": 0 00:06:56.172 }, 00:06:56.172 "claimed": true, 00:06:56.172 "claim_type": "exclusive_write", 00:06:56.172 "zoned": false, 00:06:56.172 "supported_io_types": { 00:06:56.172 "read": true, 00:06:56.172 "write": true, 00:06:56.172 "unmap": true, 00:06:56.172 "flush": true, 00:06:56.172 "reset": true, 00:06:56.172 "nvme_admin": false, 00:06:56.172 "nvme_io": false, 00:06:56.172 "nvme_io_md": false, 00:06:56.172 "write_zeroes": true, 00:06:56.172 "zcopy": true, 00:06:56.172 "get_zone_info": false, 00:06:56.172 "zone_management": false, 00:06:56.172 "zone_append": false, 00:06:56.172 "compare": false, 00:06:56.172 "compare_and_write": false, 00:06:56.172 "abort": true, 00:06:56.173 "seek_hole": false, 00:06:56.173 "seek_data": false, 00:06:56.173 "copy": true, 00:06:56.173 "nvme_iov_md": false 00:06:56.173 }, 00:06:56.173 "memory_domains": [ 00:06:56.173 { 00:06:56.173 "dma_device_id": "system", 00:06:56.173 "dma_device_type": 1 00:06:56.173 }, 00:06:56.173 { 00:06:56.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:56.173 "dma_device_type": 2 00:06:56.173 } 00:06:56.173 ], 00:06:56.173 "driver_specific": {} 00:06:56.173 } 00:06:56.173 ] 00:06:56.173 14:31:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.173 14:31:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:06:56.173 14:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:06:56.173 14:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:56.173 14:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:56.173 14:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:56.173 14:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:56.173 14:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:56.173 14:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:56.173 14:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:56.173 14:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:56.173 14:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:56.173 14:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:56.173 14:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:56.173 14:31:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.173 14:31:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.173 14:31:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.173 14:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:56.173 "name": "Existed_Raid", 00:06:56.173 "uuid": "db629cbe-8efc-4e95-915d-f393ca7e2ab2", 00:06:56.173 "strip_size_kb": 64, 00:06:56.173 "state": "configuring", 00:06:56.173 "raid_level": "raid0", 00:06:56.173 "superblock": true, 
00:06:56.173 "num_base_bdevs": 3, 00:06:56.173 "num_base_bdevs_discovered": 2, 00:06:56.173 "num_base_bdevs_operational": 3, 00:06:56.173 "base_bdevs_list": [ 00:06:56.173 { 00:06:56.173 "name": "BaseBdev1", 00:06:56.173 "uuid": "7e3e35ae-3805-4b34-83b2-4f08bae2d4fc", 00:06:56.173 "is_configured": true, 00:06:56.173 "data_offset": 2048, 00:06:56.173 "data_size": 63488 00:06:56.173 }, 00:06:56.173 { 00:06:56.173 "name": null, 00:06:56.173 "uuid": "47608525-7018-488a-9a0c-7845e9e1a3ef", 00:06:56.173 "is_configured": false, 00:06:56.173 "data_offset": 0, 00:06:56.173 "data_size": 63488 00:06:56.173 }, 00:06:56.173 { 00:06:56.173 "name": "BaseBdev3", 00:06:56.173 "uuid": "9c1429f2-2d2d-4e4e-ab79-66205584c266", 00:06:56.173 "is_configured": true, 00:06:56.173 "data_offset": 2048, 00:06:56.173 "data_size": 63488 00:06:56.173 } 00:06:56.173 ] 00:06:56.173 }' 00:06:56.173 14:31:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:56.173 14:31:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.434 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:56.434 14:31:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.434 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:06:56.434 14:31:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.434 14:31:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.694 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:06:56.694 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:06:56.694 14:31:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:06:56.694 14:31:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.694 [2024-10-01 14:31:48.126375] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:06:56.694 14:31:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.694 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:06:56.694 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:56.694 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:56.694 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:56.694 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:56.694 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:56.694 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:56.694 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:56.694 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:56.694 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:56.694 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:56.694 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:56.694 14:31:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.694 14:31:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:06:56.694 14:31:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.694 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:56.694 "name": "Existed_Raid", 00:06:56.694 "uuid": "db629cbe-8efc-4e95-915d-f393ca7e2ab2", 00:06:56.694 "strip_size_kb": 64, 00:06:56.694 "state": "configuring", 00:06:56.694 "raid_level": "raid0", 00:06:56.694 "superblock": true, 00:06:56.694 "num_base_bdevs": 3, 00:06:56.694 "num_base_bdevs_discovered": 1, 00:06:56.694 "num_base_bdevs_operational": 3, 00:06:56.694 "base_bdevs_list": [ 00:06:56.694 { 00:06:56.694 "name": "BaseBdev1", 00:06:56.694 "uuid": "7e3e35ae-3805-4b34-83b2-4f08bae2d4fc", 00:06:56.694 "is_configured": true, 00:06:56.694 "data_offset": 2048, 00:06:56.694 "data_size": 63488 00:06:56.694 }, 00:06:56.694 { 00:06:56.694 "name": null, 00:06:56.694 "uuid": "47608525-7018-488a-9a0c-7845e9e1a3ef", 00:06:56.694 "is_configured": false, 00:06:56.694 "data_offset": 0, 00:06:56.694 "data_size": 63488 00:06:56.694 }, 00:06:56.695 { 00:06:56.695 "name": null, 00:06:56.695 "uuid": "9c1429f2-2d2d-4e4e-ab79-66205584c266", 00:06:56.695 "is_configured": false, 00:06:56.695 "data_offset": 0, 00:06:56.695 "data_size": 63488 00:06:56.695 } 00:06:56.695 ] 00:06:56.695 }' 00:06:56.695 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:56.695 14:31:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.010 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:06:57.010 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:57.010 14:31:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.010 14:31:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:06:57.010 14:31:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.010 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:06:57.010 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:06:57.010 14:31:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.010 14:31:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.010 [2024-10-01 14:31:48.498457] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:06:57.010 14:31:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.010 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:06:57.010 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:57.010 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:57.010 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:57.010 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:57.010 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:57.010 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:57.010 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:57.010 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:57.010 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:06:57.010 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:57.010 14:31:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.010 14:31:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.010 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:57.010 14:31:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.010 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:57.010 "name": "Existed_Raid", 00:06:57.010 "uuid": "db629cbe-8efc-4e95-915d-f393ca7e2ab2", 00:06:57.010 "strip_size_kb": 64, 00:06:57.010 "state": "configuring", 00:06:57.010 "raid_level": "raid0", 00:06:57.010 "superblock": true, 00:06:57.010 "num_base_bdevs": 3, 00:06:57.010 "num_base_bdevs_discovered": 2, 00:06:57.010 "num_base_bdevs_operational": 3, 00:06:57.010 "base_bdevs_list": [ 00:06:57.010 { 00:06:57.010 "name": "BaseBdev1", 00:06:57.010 "uuid": "7e3e35ae-3805-4b34-83b2-4f08bae2d4fc", 00:06:57.010 "is_configured": true, 00:06:57.010 "data_offset": 2048, 00:06:57.010 "data_size": 63488 00:06:57.010 }, 00:06:57.010 { 00:06:57.010 "name": null, 00:06:57.010 "uuid": "47608525-7018-488a-9a0c-7845e9e1a3ef", 00:06:57.010 "is_configured": false, 00:06:57.010 "data_offset": 0, 00:06:57.010 "data_size": 63488 00:06:57.010 }, 00:06:57.010 { 00:06:57.010 "name": "BaseBdev3", 00:06:57.010 "uuid": "9c1429f2-2d2d-4e4e-ab79-66205584c266", 00:06:57.010 "is_configured": true, 00:06:57.010 "data_offset": 2048, 00:06:57.010 "data_size": 63488 00:06:57.010 } 00:06:57.010 ] 00:06:57.010 }' 00:06:57.010 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:57.010 14:31:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:06:57.286 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:06:57.286 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:57.286 14:31:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.286 14:31:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.286 14:31:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.286 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:06:57.286 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:57.286 14:31:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.286 14:31:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.286 [2024-10-01 14:31:48.846578] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:57.286 14:31:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.286 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:06:57.286 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:57.286 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:57.286 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:57.286 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:57.286 14:31:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:57.286 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:57.286 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:57.286 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:57.286 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:57.286 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:57.286 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:57.286 14:31:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.286 14:31:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.286 14:31:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.286 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:57.286 "name": "Existed_Raid", 00:06:57.286 "uuid": "db629cbe-8efc-4e95-915d-f393ca7e2ab2", 00:06:57.286 "strip_size_kb": 64, 00:06:57.286 "state": "configuring", 00:06:57.286 "raid_level": "raid0", 00:06:57.286 "superblock": true, 00:06:57.286 "num_base_bdevs": 3, 00:06:57.286 "num_base_bdevs_discovered": 1, 00:06:57.286 "num_base_bdevs_operational": 3, 00:06:57.286 "base_bdevs_list": [ 00:06:57.286 { 00:06:57.286 "name": null, 00:06:57.286 "uuid": "7e3e35ae-3805-4b34-83b2-4f08bae2d4fc", 00:06:57.286 "is_configured": false, 00:06:57.286 "data_offset": 0, 00:06:57.286 "data_size": 63488 00:06:57.286 }, 00:06:57.286 { 00:06:57.286 "name": null, 00:06:57.286 "uuid": "47608525-7018-488a-9a0c-7845e9e1a3ef", 00:06:57.286 "is_configured": false, 00:06:57.286 "data_offset": 0, 00:06:57.286 
"data_size": 63488 00:06:57.286 }, 00:06:57.286 { 00:06:57.286 "name": "BaseBdev3", 00:06:57.286 "uuid": "9c1429f2-2d2d-4e4e-ab79-66205584c266", 00:06:57.286 "is_configured": true, 00:06:57.286 "data_offset": 2048, 00:06:57.286 "data_size": 63488 00:06:57.286 } 00:06:57.286 ] 00:06:57.286 }' 00:06:57.286 14:31:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:57.286 14:31:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.855 14:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:57.855 14:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.855 14:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:06:57.855 14:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.855 14:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.855 14:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:06:57.855 14:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:06:57.855 14:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.855 14:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.855 [2024-10-01 14:31:49.298194] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:57.855 14:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.855 14:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:06:57.855 14:31:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:57.855 14:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:57.855 14:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:57.855 14:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:57.855 14:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:57.855 14:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:57.855 14:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:57.855 14:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:57.855 14:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:57.855 14:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:57.855 14:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.855 14:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.855 14:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:57.855 14:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.855 14:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:57.855 "name": "Existed_Raid", 00:06:57.855 "uuid": "db629cbe-8efc-4e95-915d-f393ca7e2ab2", 00:06:57.855 "strip_size_kb": 64, 00:06:57.855 "state": "configuring", 00:06:57.855 "raid_level": "raid0", 00:06:57.855 "superblock": true, 00:06:57.855 "num_base_bdevs": 3, 00:06:57.855 
"num_base_bdevs_discovered": 2, 00:06:57.855 "num_base_bdevs_operational": 3, 00:06:57.855 "base_bdevs_list": [ 00:06:57.855 { 00:06:57.855 "name": null, 00:06:57.855 "uuid": "7e3e35ae-3805-4b34-83b2-4f08bae2d4fc", 00:06:57.855 "is_configured": false, 00:06:57.855 "data_offset": 0, 00:06:57.855 "data_size": 63488 00:06:57.855 }, 00:06:57.855 { 00:06:57.855 "name": "BaseBdev2", 00:06:57.855 "uuid": "47608525-7018-488a-9a0c-7845e9e1a3ef", 00:06:57.855 "is_configured": true, 00:06:57.855 "data_offset": 2048, 00:06:57.855 "data_size": 63488 00:06:57.855 }, 00:06:57.855 { 00:06:57.855 "name": "BaseBdev3", 00:06:57.855 "uuid": "9c1429f2-2d2d-4e4e-ab79-66205584c266", 00:06:57.855 "is_configured": true, 00:06:57.855 "data_offset": 2048, 00:06:57.855 "data_size": 63488 00:06:57.855 } 00:06:57.855 ] 00:06:57.855 }' 00:06:57.856 14:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:57.856 14:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:58.116 14:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:58.116 14:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:06:58.116 14:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.116 14:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:58.116 14:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.116 14:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:06:58.116 14:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:06:58.116 14:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:58.116 14:31:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.116 14:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:58.116 14:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.116 14:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7e3e35ae-3805-4b34-83b2-4f08bae2d4fc 00:06:58.116 14:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.116 14:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:58.116 [2024-10-01 14:31:49.692755] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:06:58.116 [2024-10-01 14:31:49.692958] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:06:58.116 [2024-10-01 14:31:49.692974] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:06:58.116 [2024-10-01 14:31:49.693219] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:06:58.116 [2024-10-01 14:31:49.693336] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:06:58.116 [2024-10-01 14:31:49.693344] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:06:58.116 NewBaseBdev 00:06:58.116 [2024-10-01 14:31:49.693473] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:58.116 14:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.116 14:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:06:58.116 14:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:06:58.116 
14:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:58.116 14:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:06:58.116 14:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:58.116 14:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:58.116 14:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:06:58.116 14:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.116 14:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:58.116 14:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.116 14:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:06:58.116 14:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.116 14:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:58.116 [ 00:06:58.116 { 00:06:58.116 "name": "NewBaseBdev", 00:06:58.116 "aliases": [ 00:06:58.116 "7e3e35ae-3805-4b34-83b2-4f08bae2d4fc" 00:06:58.116 ], 00:06:58.116 "product_name": "Malloc disk", 00:06:58.116 "block_size": 512, 00:06:58.116 "num_blocks": 65536, 00:06:58.116 "uuid": "7e3e35ae-3805-4b34-83b2-4f08bae2d4fc", 00:06:58.116 "assigned_rate_limits": { 00:06:58.116 "rw_ios_per_sec": 0, 00:06:58.116 "rw_mbytes_per_sec": 0, 00:06:58.116 "r_mbytes_per_sec": 0, 00:06:58.116 "w_mbytes_per_sec": 0 00:06:58.116 }, 00:06:58.116 "claimed": true, 00:06:58.116 "claim_type": "exclusive_write", 00:06:58.116 "zoned": false, 00:06:58.116 "supported_io_types": { 00:06:58.116 "read": true, 00:06:58.116 "write": true, 00:06:58.116 
"unmap": true, 00:06:58.116 "flush": true, 00:06:58.116 "reset": true, 00:06:58.116 "nvme_admin": false, 00:06:58.116 "nvme_io": false, 00:06:58.116 "nvme_io_md": false, 00:06:58.116 "write_zeroes": true, 00:06:58.116 "zcopy": true, 00:06:58.116 "get_zone_info": false, 00:06:58.116 "zone_management": false, 00:06:58.116 "zone_append": false, 00:06:58.116 "compare": false, 00:06:58.116 "compare_and_write": false, 00:06:58.116 "abort": true, 00:06:58.116 "seek_hole": false, 00:06:58.117 "seek_data": false, 00:06:58.117 "copy": true, 00:06:58.117 "nvme_iov_md": false 00:06:58.117 }, 00:06:58.117 "memory_domains": [ 00:06:58.117 { 00:06:58.117 "dma_device_id": "system", 00:06:58.117 "dma_device_type": 1 00:06:58.117 }, 00:06:58.117 { 00:06:58.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:58.117 "dma_device_type": 2 00:06:58.117 } 00:06:58.117 ], 00:06:58.117 "driver_specific": {} 00:06:58.117 } 00:06:58.117 ] 00:06:58.117 14:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.117 14:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:06:58.117 14:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:06:58.117 14:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:58.117 14:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:58.117 14:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:58.117 14:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:58.117 14:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:06:58.117 14:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:06:58.117 14:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:58.117 14:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:58.117 14:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:58.117 14:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:58.117 14:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:58.117 14:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.117 14:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:58.117 14:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.117 14:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:58.117 "name": "Existed_Raid", 00:06:58.117 "uuid": "db629cbe-8efc-4e95-915d-f393ca7e2ab2", 00:06:58.117 "strip_size_kb": 64, 00:06:58.117 "state": "online", 00:06:58.117 "raid_level": "raid0", 00:06:58.117 "superblock": true, 00:06:58.117 "num_base_bdevs": 3, 00:06:58.117 "num_base_bdevs_discovered": 3, 00:06:58.117 "num_base_bdevs_operational": 3, 00:06:58.117 "base_bdevs_list": [ 00:06:58.117 { 00:06:58.117 "name": "NewBaseBdev", 00:06:58.117 "uuid": "7e3e35ae-3805-4b34-83b2-4f08bae2d4fc", 00:06:58.117 "is_configured": true, 00:06:58.117 "data_offset": 2048, 00:06:58.117 "data_size": 63488 00:06:58.117 }, 00:06:58.117 { 00:06:58.117 "name": "BaseBdev2", 00:06:58.117 "uuid": "47608525-7018-488a-9a0c-7845e9e1a3ef", 00:06:58.117 "is_configured": true, 00:06:58.117 "data_offset": 2048, 00:06:58.117 "data_size": 63488 00:06:58.117 }, 00:06:58.117 { 00:06:58.117 "name": "BaseBdev3", 00:06:58.117 "uuid": "9c1429f2-2d2d-4e4e-ab79-66205584c266", 00:06:58.117 
"is_configured": true, 00:06:58.117 "data_offset": 2048, 00:06:58.117 "data_size": 63488 00:06:58.117 } 00:06:58.117 ] 00:06:58.117 }' 00:06:58.117 14:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:58.117 14:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:58.377 14:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:06:58.377 14:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:58.378 14:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:58.378 14:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:58.378 14:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:06:58.378 14:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:58.378 14:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:58.378 14:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:58.378 14:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.378 14:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:58.378 [2024-10-01 14:31:50.053268] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:58.637 14:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.637 14:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:58.637 "name": "Existed_Raid", 00:06:58.637 "aliases": [ 00:06:58.637 "db629cbe-8efc-4e95-915d-f393ca7e2ab2" 00:06:58.637 ], 00:06:58.637 "product_name": "Raid 
Volume", 00:06:58.637 "block_size": 512, 00:06:58.637 "num_blocks": 190464, 00:06:58.637 "uuid": "db629cbe-8efc-4e95-915d-f393ca7e2ab2", 00:06:58.637 "assigned_rate_limits": { 00:06:58.637 "rw_ios_per_sec": 0, 00:06:58.637 "rw_mbytes_per_sec": 0, 00:06:58.637 "r_mbytes_per_sec": 0, 00:06:58.637 "w_mbytes_per_sec": 0 00:06:58.637 }, 00:06:58.637 "claimed": false, 00:06:58.637 "zoned": false, 00:06:58.637 "supported_io_types": { 00:06:58.637 "read": true, 00:06:58.637 "write": true, 00:06:58.637 "unmap": true, 00:06:58.637 "flush": true, 00:06:58.637 "reset": true, 00:06:58.637 "nvme_admin": false, 00:06:58.637 "nvme_io": false, 00:06:58.637 "nvme_io_md": false, 00:06:58.637 "write_zeroes": true, 00:06:58.637 "zcopy": false, 00:06:58.637 "get_zone_info": false, 00:06:58.637 "zone_management": false, 00:06:58.637 "zone_append": false, 00:06:58.637 "compare": false, 00:06:58.637 "compare_and_write": false, 00:06:58.637 "abort": false, 00:06:58.637 "seek_hole": false, 00:06:58.637 "seek_data": false, 00:06:58.637 "copy": false, 00:06:58.637 "nvme_iov_md": false 00:06:58.637 }, 00:06:58.637 "memory_domains": [ 00:06:58.637 { 00:06:58.637 "dma_device_id": "system", 00:06:58.637 "dma_device_type": 1 00:06:58.637 }, 00:06:58.637 { 00:06:58.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:58.637 "dma_device_type": 2 00:06:58.637 }, 00:06:58.637 { 00:06:58.637 "dma_device_id": "system", 00:06:58.637 "dma_device_type": 1 00:06:58.637 }, 00:06:58.637 { 00:06:58.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:58.637 "dma_device_type": 2 00:06:58.637 }, 00:06:58.637 { 00:06:58.637 "dma_device_id": "system", 00:06:58.637 "dma_device_type": 1 00:06:58.637 }, 00:06:58.637 { 00:06:58.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:58.637 "dma_device_type": 2 00:06:58.637 } 00:06:58.637 ], 00:06:58.637 "driver_specific": { 00:06:58.637 "raid": { 00:06:58.637 "uuid": "db629cbe-8efc-4e95-915d-f393ca7e2ab2", 00:06:58.637 "strip_size_kb": 64, 00:06:58.637 "state": "online", 
00:06:58.637 "raid_level": "raid0", 00:06:58.637 "superblock": true, 00:06:58.637 "num_base_bdevs": 3, 00:06:58.637 "num_base_bdevs_discovered": 3, 00:06:58.637 "num_base_bdevs_operational": 3, 00:06:58.637 "base_bdevs_list": [ 00:06:58.637 { 00:06:58.637 "name": "NewBaseBdev", 00:06:58.637 "uuid": "7e3e35ae-3805-4b34-83b2-4f08bae2d4fc", 00:06:58.637 "is_configured": true, 00:06:58.637 "data_offset": 2048, 00:06:58.637 "data_size": 63488 00:06:58.637 }, 00:06:58.637 { 00:06:58.637 "name": "BaseBdev2", 00:06:58.637 "uuid": "47608525-7018-488a-9a0c-7845e9e1a3ef", 00:06:58.637 "is_configured": true, 00:06:58.637 "data_offset": 2048, 00:06:58.637 "data_size": 63488 00:06:58.637 }, 00:06:58.637 { 00:06:58.637 "name": "BaseBdev3", 00:06:58.637 "uuid": "9c1429f2-2d2d-4e4e-ab79-66205584c266", 00:06:58.637 "is_configured": true, 00:06:58.637 "data_offset": 2048, 00:06:58.637 "data_size": 63488 00:06:58.637 } 00:06:58.637 ] 00:06:58.637 } 00:06:58.637 } 00:06:58.637 }' 00:06:58.637 14:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:58.637 14:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:06:58.637 BaseBdev2 00:06:58.637 BaseBdev3' 00:06:58.637 14:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:58.637 14:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:58.637 14:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:58.637 14:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:58.637 14:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b 
NewBaseBdev 00:06:58.637 14:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.637 14:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:58.637 14:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.637 14:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:58.637 14:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:58.637 14:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:58.637 14:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:58.637 14:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.637 14:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:58.637 14:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:58.637 14:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.637 14:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:58.637 14:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:58.637 14:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:58.637 14:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:06:58.637 14:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.637 14:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:06:58.637 14:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:58.638 14:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.638 14:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:58.638 14:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:58.638 14:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:58.638 14:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.638 14:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:58.638 [2024-10-01 14:31:50.248966] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:58.638 [2024-10-01 14:31:50.248996] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:58.638 [2024-10-01 14:31:50.249066] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:58.638 [2024-10-01 14:31:50.249120] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:58.638 [2024-10-01 14:31:50.249133] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:06:58.638 14:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.638 14:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63157 00:06:58.638 14:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 63157 ']' 00:06:58.638 14:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 63157 00:06:58.638 14:31:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:06:58.638 14:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:58.638 14:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63157 00:06:58.638 killing process with pid 63157 00:06:58.638 14:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:58.638 14:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:58.638 14:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63157' 00:06:58.638 14:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 63157 00:06:58.638 [2024-10-01 14:31:50.282366] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:58.638 14:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 63157 00:06:58.896 [2024-10-01 14:31:50.473188] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:59.893 ************************************ 00:06:59.893 END TEST raid_state_function_test_sb 00:06:59.893 ************************************ 00:06:59.893 14:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:06:59.893 00:06:59.893 real 0m8.188s 00:06:59.893 user 0m12.993s 00:06:59.893 sys 0m1.313s 00:06:59.893 14:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.893 14:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.893 14:31:51 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:06:59.893 14:31:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:59.893 14:31:51 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.893 14:31:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:59.893 ************************************ 00:06:59.893 START TEST raid_superblock_test 00:06:59.893 ************************************ 00:06:59.893 14:31:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3 00:06:59.893 14:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:06:59.893 14:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:06:59.893 14:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:06:59.893 14:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:06:59.893 14:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:06:59.893 14:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:06:59.893 14:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:06:59.893 14:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:06:59.893 14:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:06:59.893 14:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:06:59.893 14:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:06:59.893 14:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:06:59.893 14:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:06:59.893 14:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:06:59.893 14:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:06:59.893 14:31:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:06:59.893 14:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63755 00:06:59.893 14:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63755 00:06:59.893 14:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:06:59.893 14:31:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 63755 ']' 00:06:59.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.893 14:31:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.893 14:31:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:59.893 14:31:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.893 14:31:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:59.893 14:31:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.893 [2024-10-01 14:31:51.456355] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:06:59.893 [2024-10-01 14:31:51.456546] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63755 ] 00:07:00.185 [2024-10-01 14:31:51.624242] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.443 [2024-10-01 14:31:51.893674] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.443 [2024-10-01 14:31:52.033075] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:00.443 [2024-10-01 14:31:52.033108] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:00.703 14:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:00.703 14:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:00.703 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:00.703 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:00.703 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:00.703 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:00.703 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:00.703 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:00.703 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:00.703 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:00.703 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:00.703 
14:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.703 14:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.963 malloc1 00:07:00.963 14:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.963 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.964 [2024-10-01 14:31:52.401722] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:00.964 [2024-10-01 14:31:52.401792] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:00.964 [2024-10-01 14:31:52.401812] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:00.964 [2024-10-01 14:31:52.401826] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:00.964 [2024-10-01 14:31:52.404019] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:00.964 [2024-10-01 14:31:52.404056] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:00.964 pt1 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.964 malloc2 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.964 [2024-10-01 14:31:52.450977] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:00.964 [2024-10-01 14:31:52.451035] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:00.964 [2024-10-01 14:31:52.451057] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:00.964 [2024-10-01 14:31:52.451066] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:00.964 [2024-10-01 14:31:52.453212] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:00.964 [2024-10-01 14:31:52.453249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:00.964 
pt2 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.964 malloc3 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.964 [2024-10-01 14:31:52.487779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:07:00.964 [2024-10-01 14:31:52.487840] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:00.964 [2024-10-01 14:31:52.487859] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:00.964 [2024-10-01 14:31:52.487868] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:00.964 [2024-10-01 14:31:52.490055] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:00.964 [2024-10-01 14:31:52.490094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:07:00.964 pt3 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.964 [2024-10-01 14:31:52.499871] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:00.964 [2024-10-01 14:31:52.501778] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:00.964 [2024-10-01 14:31:52.501849] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:07:00.964 [2024-10-01 14:31:52.502012] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:00.964 [2024-10-01 14:31:52.502024] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:00.964 [2024-10-01 14:31:52.502296] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:07:00.964 [2024-10-01 14:31:52.502444] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:00.964 [2024-10-01 14:31:52.502453] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:00.964 [2024-10-01 14:31:52.502607] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:00.964 "name": "raid_bdev1", 00:07:00.964 "uuid": "49558a11-804e-4327-951d-444e33e544f0", 00:07:00.964 "strip_size_kb": 64, 00:07:00.964 "state": "online", 00:07:00.964 "raid_level": "raid0", 00:07:00.964 "superblock": true, 00:07:00.964 "num_base_bdevs": 3, 00:07:00.964 "num_base_bdevs_discovered": 3, 00:07:00.964 "num_base_bdevs_operational": 3, 00:07:00.964 "base_bdevs_list": [ 00:07:00.964 { 00:07:00.964 "name": "pt1", 00:07:00.964 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:00.964 "is_configured": true, 00:07:00.964 "data_offset": 2048, 00:07:00.964 "data_size": 63488 00:07:00.964 }, 00:07:00.964 { 00:07:00.964 "name": "pt2", 00:07:00.964 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:00.964 "is_configured": true, 00:07:00.964 "data_offset": 2048, 00:07:00.964 "data_size": 63488 00:07:00.964 }, 00:07:00.964 { 00:07:00.964 "name": "pt3", 00:07:00.964 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:00.964 "is_configured": true, 00:07:00.964 "data_offset": 2048, 00:07:00.964 "data_size": 63488 00:07:00.964 } 00:07:00.964 ] 00:07:00.964 }' 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:00.964 14:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.224 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:01.224 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:01.224 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:01.224 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:07:01.224 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:01.224 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:01.224 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:01.224 14:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.224 14:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.224 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:01.224 [2024-10-01 14:31:52.844204] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:01.224 14:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.224 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:01.224 "name": "raid_bdev1", 00:07:01.224 "aliases": [ 00:07:01.224 "49558a11-804e-4327-951d-444e33e544f0" 00:07:01.224 ], 00:07:01.224 "product_name": "Raid Volume", 00:07:01.224 "block_size": 512, 00:07:01.224 "num_blocks": 190464, 00:07:01.224 "uuid": "49558a11-804e-4327-951d-444e33e544f0", 00:07:01.224 "assigned_rate_limits": { 00:07:01.224 "rw_ios_per_sec": 0, 00:07:01.224 "rw_mbytes_per_sec": 0, 00:07:01.224 "r_mbytes_per_sec": 0, 00:07:01.224 "w_mbytes_per_sec": 0 00:07:01.224 }, 00:07:01.224 "claimed": false, 00:07:01.224 "zoned": false, 00:07:01.224 "supported_io_types": { 00:07:01.224 "read": true, 00:07:01.224 "write": true, 00:07:01.224 "unmap": true, 00:07:01.224 "flush": true, 00:07:01.224 "reset": true, 00:07:01.224 "nvme_admin": false, 00:07:01.224 "nvme_io": false, 00:07:01.224 "nvme_io_md": false, 00:07:01.224 "write_zeroes": true, 00:07:01.224 "zcopy": false, 00:07:01.224 "get_zone_info": false, 00:07:01.224 "zone_management": false, 00:07:01.224 "zone_append": false, 00:07:01.224 "compare": 
false, 00:07:01.224 "compare_and_write": false, 00:07:01.224 "abort": false, 00:07:01.224 "seek_hole": false, 00:07:01.224 "seek_data": false, 00:07:01.224 "copy": false, 00:07:01.224 "nvme_iov_md": false 00:07:01.224 }, 00:07:01.224 "memory_domains": [ 00:07:01.224 { 00:07:01.224 "dma_device_id": "system", 00:07:01.224 "dma_device_type": 1 00:07:01.224 }, 00:07:01.224 { 00:07:01.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:01.224 "dma_device_type": 2 00:07:01.224 }, 00:07:01.224 { 00:07:01.224 "dma_device_id": "system", 00:07:01.224 "dma_device_type": 1 00:07:01.224 }, 00:07:01.224 { 00:07:01.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:01.224 "dma_device_type": 2 00:07:01.224 }, 00:07:01.224 { 00:07:01.224 "dma_device_id": "system", 00:07:01.224 "dma_device_type": 1 00:07:01.224 }, 00:07:01.224 { 00:07:01.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:01.224 "dma_device_type": 2 00:07:01.224 } 00:07:01.224 ], 00:07:01.224 "driver_specific": { 00:07:01.224 "raid": { 00:07:01.224 "uuid": "49558a11-804e-4327-951d-444e33e544f0", 00:07:01.224 "strip_size_kb": 64, 00:07:01.224 "state": "online", 00:07:01.224 "raid_level": "raid0", 00:07:01.224 "superblock": true, 00:07:01.224 "num_base_bdevs": 3, 00:07:01.224 "num_base_bdevs_discovered": 3, 00:07:01.224 "num_base_bdevs_operational": 3, 00:07:01.224 "base_bdevs_list": [ 00:07:01.224 { 00:07:01.224 "name": "pt1", 00:07:01.224 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:01.224 "is_configured": true, 00:07:01.224 "data_offset": 2048, 00:07:01.224 "data_size": 63488 00:07:01.224 }, 00:07:01.224 { 00:07:01.224 "name": "pt2", 00:07:01.224 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:01.224 "is_configured": true, 00:07:01.224 "data_offset": 2048, 00:07:01.224 "data_size": 63488 00:07:01.224 }, 00:07:01.224 { 00:07:01.224 "name": "pt3", 00:07:01.224 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:01.224 "is_configured": true, 00:07:01.224 "data_offset": 2048, 00:07:01.224 "data_size": 
63488 00:07:01.224 } 00:07:01.224 ] 00:07:01.224 } 00:07:01.224 } 00:07:01.224 }' 00:07:01.224 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:01.483 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:01.483 pt2 00:07:01.483 pt3' 00:07:01.483 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:01.483 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:01.483 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:01.483 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:01.483 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:01.483 14:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.483 14:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.483 14:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.483 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:01.483 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:01.483 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:01.483 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:01.483 14:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.483 14:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:07:01.484 14:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.484 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.484 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:01.484 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:01.484 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:01.484 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:07:01.484 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:01.484 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.484 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.484 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.484 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:01.484 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:01.484 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:01.484 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:01.484 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.484 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.484 [2024-10-01 14:31:53.072211] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:01.484 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:07:01.484 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=49558a11-804e-4327-951d-444e33e544f0 00:07:01.484 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 49558a11-804e-4327-951d-444e33e544f0 ']' 00:07:01.484 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:01.484 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.484 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.484 [2024-10-01 14:31:53.115919] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:01.484 [2024-10-01 14:31:53.115956] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:01.484 [2024-10-01 14:31:53.116022] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:01.484 [2024-10-01 14:31:53.116082] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:01.484 [2024-10-01 14:31:53.116091] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:01.484 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.484 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:01.484 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:01.484 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.484 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.484 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.484 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:07:01.484 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:01.484 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:01.484 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:01.484 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.484 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.745 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.745 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:01.745 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:01.745 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.745 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.745 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.745 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:01.745 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:07:01.745 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.745 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.745 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.745 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:01.745 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.745 14:31:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:01.745 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:01.745 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.745 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:01.745 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:07:01.745 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:01.745 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:07:01.745 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:01.745 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.745 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:01.745 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.746 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:07:01.746 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.746 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.746 [2024-10-01 14:31:53.227993] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:01.746 [2024-10-01 14:31:53.229901] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:01.746 [2024-10-01 14:31:53.229956] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:07:01.746 [2024-10-01 14:31:53.230004] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:01.746 [2024-10-01 14:31:53.230050] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:01.746 [2024-10-01 14:31:53.230070] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:07:01.746 [2024-10-01 14:31:53.230086] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:01.746 [2024-10-01 14:31:53.230095] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:01.746 request: 00:07:01.746 { 00:07:01.746 "name": "raid_bdev1", 00:07:01.746 "raid_level": "raid0", 00:07:01.746 "base_bdevs": [ 00:07:01.746 "malloc1", 00:07:01.746 "malloc2", 00:07:01.746 "malloc3" 00:07:01.746 ], 00:07:01.746 "strip_size_kb": 64, 00:07:01.746 "superblock": false, 00:07:01.746 "method": "bdev_raid_create", 00:07:01.746 "req_id": 1 00:07:01.746 } 00:07:01.746 Got JSON-RPC error response 00:07:01.746 response: 00:07:01.746 { 00:07:01.746 "code": -17, 00:07:01.746 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:01.746 } 00:07:01.746 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:01.746 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:01.746 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:01.746 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:01.746 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:01.746 14:31:53 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:01.746 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:01.746 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.746 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.746 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.746 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:01.746 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:01.746 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:01.746 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.746 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.746 [2024-10-01 14:31:53.279961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:01.746 [2024-10-01 14:31:53.280021] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:01.746 [2024-10-01 14:31:53.280039] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:07:01.746 [2024-10-01 14:31:53.280047] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:01.746 [2024-10-01 14:31:53.282232] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:01.746 [2024-10-01 14:31:53.282269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:01.746 [2024-10-01 14:31:53.282348] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:01.746 [2024-10-01 14:31:53.282396] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:07:01.746 pt1 00:07:01.746 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.746 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:07:01.746 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:01.746 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:01.746 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:01.746 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:01.746 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:01.746 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:01.746 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:01.746 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:01.746 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:01.746 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:01.746 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:01.746 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.746 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.746 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.746 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:01.746 "name": "raid_bdev1", 00:07:01.746 "uuid": "49558a11-804e-4327-951d-444e33e544f0", 00:07:01.746 
"strip_size_kb": 64, 00:07:01.746 "state": "configuring", 00:07:01.746 "raid_level": "raid0", 00:07:01.746 "superblock": true, 00:07:01.746 "num_base_bdevs": 3, 00:07:01.746 "num_base_bdevs_discovered": 1, 00:07:01.746 "num_base_bdevs_operational": 3, 00:07:01.746 "base_bdevs_list": [ 00:07:01.746 { 00:07:01.746 "name": "pt1", 00:07:01.746 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:01.746 "is_configured": true, 00:07:01.746 "data_offset": 2048, 00:07:01.746 "data_size": 63488 00:07:01.746 }, 00:07:01.746 { 00:07:01.746 "name": null, 00:07:01.746 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:01.746 "is_configured": false, 00:07:01.746 "data_offset": 2048, 00:07:01.746 "data_size": 63488 00:07:01.746 }, 00:07:01.746 { 00:07:01.746 "name": null, 00:07:01.746 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:01.746 "is_configured": false, 00:07:01.746 "data_offset": 2048, 00:07:01.746 "data_size": 63488 00:07:01.746 } 00:07:01.746 ] 00:07:01.746 }' 00:07:01.746 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:01.746 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.008 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:07:02.008 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:02.008 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.008 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.008 [2024-10-01 14:31:53.676072] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:02.008 [2024-10-01 14:31:53.676141] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:02.008 [2024-10-01 14:31:53.676162] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:07:02.008 [2024-10-01 14:31:53.676172] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:02.008 [2024-10-01 14:31:53.676573] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:02.008 [2024-10-01 14:31:53.676588] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:02.008 [2024-10-01 14:31:53.676662] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:02.008 [2024-10-01 14:31:53.676682] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:02.008 pt2 00:07:02.008 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.008 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:07:02.008 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.008 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.008 [2024-10-01 14:31:53.684109] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:07:02.008 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.008 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:07:02.008 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:02.008 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:02.008 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:02.008 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:02.008 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:02.008 14:31:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:02.008 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:02.008 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:02.008 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:02.270 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:02.270 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:02.270 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.270 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.270 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.270 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:02.270 "name": "raid_bdev1", 00:07:02.270 "uuid": "49558a11-804e-4327-951d-444e33e544f0", 00:07:02.270 "strip_size_kb": 64, 00:07:02.270 "state": "configuring", 00:07:02.270 "raid_level": "raid0", 00:07:02.270 "superblock": true, 00:07:02.270 "num_base_bdevs": 3, 00:07:02.270 "num_base_bdevs_discovered": 1, 00:07:02.270 "num_base_bdevs_operational": 3, 00:07:02.270 "base_bdevs_list": [ 00:07:02.270 { 00:07:02.270 "name": "pt1", 00:07:02.270 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:02.270 "is_configured": true, 00:07:02.270 "data_offset": 2048, 00:07:02.270 "data_size": 63488 00:07:02.270 }, 00:07:02.270 { 00:07:02.270 "name": null, 00:07:02.270 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:02.270 "is_configured": false, 00:07:02.270 "data_offset": 0, 00:07:02.270 "data_size": 63488 00:07:02.270 }, 00:07:02.270 { 00:07:02.270 "name": null, 00:07:02.270 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:02.270 
"is_configured": false, 00:07:02.270 "data_offset": 2048, 00:07:02.270 "data_size": 63488 00:07:02.270 } 00:07:02.270 ] 00:07:02.270 }' 00:07:02.270 14:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:02.270 14:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.531 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:02.531 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:02.531 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:02.531 14:31:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.531 14:31:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.531 [2024-10-01 14:31:54.048141] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:02.531 [2024-10-01 14:31:54.048211] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:02.531 [2024-10-01 14:31:54.048228] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:07:02.531 [2024-10-01 14:31:54.048238] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:02.531 [2024-10-01 14:31:54.048648] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:02.531 [2024-10-01 14:31:54.048678] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:02.531 [2024-10-01 14:31:54.048757] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:02.531 [2024-10-01 14:31:54.048786] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:02.531 pt2 00:07:02.531 14:31:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:02.531 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:02.531 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:02.531 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:07:02.531 14:31:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.531 14:31:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.531 [2024-10-01 14:31:54.056172] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:07:02.531 [2024-10-01 14:31:54.056223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:02.531 [2024-10-01 14:31:54.056240] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:07:02.531 [2024-10-01 14:31:54.056251] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:02.531 [2024-10-01 14:31:54.056636] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:02.531 [2024-10-01 14:31:54.056661] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:07:02.531 [2024-10-01 14:31:54.056741] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:07:02.531 [2024-10-01 14:31:54.056763] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:07:02.531 [2024-10-01 14:31:54.056879] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:02.531 [2024-10-01 14:31:54.056894] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:02.531 [2024-10-01 14:31:54.057143] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:02.532 [2024-10-01 14:31:54.057277] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:02.532 [2024-10-01 14:31:54.057286] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:02.532 [2024-10-01 14:31:54.057410] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:02.532 pt3 00:07:02.532 14:31:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.532 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:02.532 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:02.532 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:02.532 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:02.532 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:02.532 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:02.532 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:02.532 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:02.532 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:02.532 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:02.532 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:02.532 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:02.532 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:02.532 14:31:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:02.532 14:31:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.532 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:02.532 14:31:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.532 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:02.532 "name": "raid_bdev1", 00:07:02.532 "uuid": "49558a11-804e-4327-951d-444e33e544f0", 00:07:02.532 "strip_size_kb": 64, 00:07:02.532 "state": "online", 00:07:02.532 "raid_level": "raid0", 00:07:02.532 "superblock": true, 00:07:02.532 "num_base_bdevs": 3, 00:07:02.532 "num_base_bdevs_discovered": 3, 00:07:02.532 "num_base_bdevs_operational": 3, 00:07:02.532 "base_bdevs_list": [ 00:07:02.532 { 00:07:02.532 "name": "pt1", 00:07:02.532 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:02.532 "is_configured": true, 00:07:02.532 "data_offset": 2048, 00:07:02.532 "data_size": 63488 00:07:02.532 }, 00:07:02.532 { 00:07:02.532 "name": "pt2", 00:07:02.532 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:02.532 "is_configured": true, 00:07:02.532 "data_offset": 2048, 00:07:02.532 "data_size": 63488 00:07:02.532 }, 00:07:02.532 { 00:07:02.532 "name": "pt3", 00:07:02.532 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:02.532 "is_configured": true, 00:07:02.532 "data_offset": 2048, 00:07:02.532 "data_size": 63488 00:07:02.532 } 00:07:02.532 ] 00:07:02.532 }' 00:07:02.532 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:02.532 14:31:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.792 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:02.792 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:02.792 14:31:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:02.792 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:02.792 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:02.792 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:02.792 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:02.792 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:02.792 14:31:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.792 14:31:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.792 [2024-10-01 14:31:54.420554] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:02.792 14:31:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.792 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:02.792 "name": "raid_bdev1", 00:07:02.792 "aliases": [ 00:07:02.792 "49558a11-804e-4327-951d-444e33e544f0" 00:07:02.792 ], 00:07:02.792 "product_name": "Raid Volume", 00:07:02.792 "block_size": 512, 00:07:02.792 "num_blocks": 190464, 00:07:02.792 "uuid": "49558a11-804e-4327-951d-444e33e544f0", 00:07:02.792 "assigned_rate_limits": { 00:07:02.792 "rw_ios_per_sec": 0, 00:07:02.792 "rw_mbytes_per_sec": 0, 00:07:02.792 "r_mbytes_per_sec": 0, 00:07:02.792 "w_mbytes_per_sec": 0 00:07:02.792 }, 00:07:02.792 "claimed": false, 00:07:02.792 "zoned": false, 00:07:02.792 "supported_io_types": { 00:07:02.792 "read": true, 00:07:02.792 "write": true, 00:07:02.792 "unmap": true, 00:07:02.792 "flush": true, 00:07:02.792 "reset": true, 00:07:02.792 "nvme_admin": false, 00:07:02.792 "nvme_io": false, 00:07:02.792 "nvme_io_md": false, 00:07:02.792 
"write_zeroes": true, 00:07:02.792 "zcopy": false, 00:07:02.792 "get_zone_info": false, 00:07:02.792 "zone_management": false, 00:07:02.792 "zone_append": false, 00:07:02.792 "compare": false, 00:07:02.792 "compare_and_write": false, 00:07:02.792 "abort": false, 00:07:02.792 "seek_hole": false, 00:07:02.792 "seek_data": false, 00:07:02.792 "copy": false, 00:07:02.792 "nvme_iov_md": false 00:07:02.792 }, 00:07:02.792 "memory_domains": [ 00:07:02.792 { 00:07:02.792 "dma_device_id": "system", 00:07:02.792 "dma_device_type": 1 00:07:02.792 }, 00:07:02.792 { 00:07:02.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:02.792 "dma_device_type": 2 00:07:02.792 }, 00:07:02.792 { 00:07:02.792 "dma_device_id": "system", 00:07:02.792 "dma_device_type": 1 00:07:02.792 }, 00:07:02.792 { 00:07:02.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:02.792 "dma_device_type": 2 00:07:02.792 }, 00:07:02.792 { 00:07:02.792 "dma_device_id": "system", 00:07:02.792 "dma_device_type": 1 00:07:02.792 }, 00:07:02.792 { 00:07:02.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:02.792 "dma_device_type": 2 00:07:02.792 } 00:07:02.792 ], 00:07:02.792 "driver_specific": { 00:07:02.792 "raid": { 00:07:02.792 "uuid": "49558a11-804e-4327-951d-444e33e544f0", 00:07:02.792 "strip_size_kb": 64, 00:07:02.792 "state": "online", 00:07:02.792 "raid_level": "raid0", 00:07:02.792 "superblock": true, 00:07:02.792 "num_base_bdevs": 3, 00:07:02.792 "num_base_bdevs_discovered": 3, 00:07:02.792 "num_base_bdevs_operational": 3, 00:07:02.792 "base_bdevs_list": [ 00:07:02.792 { 00:07:02.792 "name": "pt1", 00:07:02.792 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:02.792 "is_configured": true, 00:07:02.792 "data_offset": 2048, 00:07:02.792 "data_size": 63488 00:07:02.792 }, 00:07:02.792 { 00:07:02.792 "name": "pt2", 00:07:02.792 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:02.792 "is_configured": true, 00:07:02.792 "data_offset": 2048, 00:07:02.792 "data_size": 63488 00:07:02.792 }, 00:07:02.792 
{ 00:07:02.792 "name": "pt3", 00:07:02.792 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:02.792 "is_configured": true, 00:07:02.792 "data_offset": 2048, 00:07:02.792 "data_size": 63488 00:07:02.792 } 00:07:02.792 ] 00:07:02.792 } 00:07:02.792 } 00:07:02.792 }' 00:07:02.793 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:03.054 pt2 00:07:03.054 pt3' 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:03.054 [2024-10-01 
14:31:54.600565] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 49558a11-804e-4327-951d-444e33e544f0 '!=' 49558a11-804e-4327-951d-444e33e544f0 ']' 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63755 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 63755 ']' 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 63755 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63755 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:03.054 killing process with pid 63755 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63755' 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 63755 00:07:03.054 [2024-10-01 14:31:54.651044] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:03.054 14:31:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@974 -- # wait 63755 00:07:03.054 [2024-10-01 14:31:54.651124] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:03.054 [2024-10-01 14:31:54.651183] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:03.054 [2024-10-01 14:31:54.651198] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:03.315 [2024-10-01 14:31:54.844047] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:04.255 14:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:04.255 00:07:04.255 real 0m4.305s 00:07:04.255 user 0m6.250s 00:07:04.255 sys 0m0.625s 00:07:04.255 14:31:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.255 14:31:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.255 ************************************ 00:07:04.255 END TEST raid_superblock_test 00:07:04.255 ************************************ 00:07:04.255 14:31:55 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:07:04.255 14:31:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:04.255 14:31:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.255 14:31:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:04.255 ************************************ 00:07:04.255 START TEST raid_read_error_test 00:07:04.255 ************************************ 00:07:04.255 14:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:07:04.255 14:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:04.255 14:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:07:04.255 14:31:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:04.256 14:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:04.256 14:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:04.256 14:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:04.256 14:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:04.256 14:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:04.256 14:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:04.256 14:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:04.256 14:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:04.256 14:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:07:04.256 14:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:04.256 14:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:04.256 14:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:04.256 14:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:04.256 14:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:04.256 14:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:04.256 14:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:04.256 14:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:04.256 14:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:04.256 14:31:55 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:04.256 14:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:04.256 14:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:04.256 14:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:04.256 14:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Ln1JrzmltV 00:07:04.256 14:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=64003 00:07:04.256 14:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 64003 00:07:04.256 14:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 64003 ']' 00:07:04.256 14:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.256 14:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.256 14:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.256 14:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.256 14:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.256 14:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:04.256 [2024-10-01 14:31:55.818927] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:07:04.256 [2024-10-01 14:31:55.819059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64003 ] 00:07:04.515 [2024-10-01 14:31:55.972650] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.515 [2024-10-01 14:31:56.169800] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.774 [2024-10-01 14:31:56.319227] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:04.774 [2024-10-01 14:31:56.319272] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:05.347 14:31:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.347 14:31:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:05.347 14:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:05.347 14:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:05.347 14:31:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.347 14:31:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.347 BaseBdev1_malloc 00:07:05.347 14:31:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.347 14:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:05.347 14:31:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.347 14:31:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.347 true 00:07:05.347 14:31:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:05.347 14:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:05.347 14:31:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.347 14:31:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.347 [2024-10-01 14:31:56.782810] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:05.347 [2024-10-01 14:31:56.782867] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:05.347 [2024-10-01 14:31:56.782886] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:05.347 [2024-10-01 14:31:56.782898] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:05.347 [2024-10-01 14:31:56.785095] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:05.347 [2024-10-01 14:31:56.785136] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:05.347 BaseBdev1 00:07:05.347 14:31:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.347 14:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:05.347 14:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:05.347 14:31:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.347 14:31:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.347 BaseBdev2_malloc 00:07:05.347 14:31:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.347 14:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:05.347 14:31:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.347 14:31:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.347 true 00:07:05.347 14:31:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.347 14:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:05.347 14:31:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.347 14:31:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.347 [2024-10-01 14:31:56.841475] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:05.347 [2024-10-01 14:31:56.841533] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:05.347 [2024-10-01 14:31:56.841552] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:05.347 [2024-10-01 14:31:56.841562] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:05.347 [2024-10-01 14:31:56.843723] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:05.347 [2024-10-01 14:31:56.843762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:05.347 BaseBdev2 00:07:05.347 14:31:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.347 14:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:05.347 14:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:07:05.347 14:31:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.347 14:31:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.347 BaseBdev3_malloc 00:07:05.347 14:31:56 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.347 14:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:07:05.347 14:31:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.347 14:31:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.347 true 00:07:05.347 14:31:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.347 14:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:07:05.347 14:31:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.347 14:31:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.347 [2024-10-01 14:31:56.889949] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:07:05.347 [2024-10-01 14:31:56.890007] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:05.347 [2024-10-01 14:31:56.890026] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:07:05.347 [2024-10-01 14:31:56.890036] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:05.347 [2024-10-01 14:31:56.892235] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:05.347 [2024-10-01 14:31:56.892272] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:07:05.347 BaseBdev3 00:07:05.347 14:31:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.347 14:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:07:05.348 14:31:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.348 14:31:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.348 [2024-10-01 14:31:56.902051] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:05.348 [2024-10-01 14:31:56.903942] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:05.348 [2024-10-01 14:31:56.904025] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:05.348 [2024-10-01 14:31:56.904230] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:07:05.348 [2024-10-01 14:31:56.904241] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:05.348 [2024-10-01 14:31:56.904519] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:05.348 [2024-10-01 14:31:56.904673] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:07:05.348 [2024-10-01 14:31:56.904684] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:07:05.348 [2024-10-01 14:31:56.904850] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:05.348 14:31:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.348 14:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:05.348 14:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:05.348 14:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:05.348 14:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:05.348 14:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:05.348 14:31:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:05.348 14:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:05.348 14:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:05.348 14:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:05.348 14:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:05.348 14:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:05.348 14:31:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.348 14:31:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.348 14:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:05.348 14:31:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.348 14:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:05.348 "name": "raid_bdev1", 00:07:05.348 "uuid": "a7466e3f-d7bd-47c8-b326-eeab4ef21df2", 00:07:05.348 "strip_size_kb": 64, 00:07:05.348 "state": "online", 00:07:05.348 "raid_level": "raid0", 00:07:05.348 "superblock": true, 00:07:05.348 "num_base_bdevs": 3, 00:07:05.348 "num_base_bdevs_discovered": 3, 00:07:05.348 "num_base_bdevs_operational": 3, 00:07:05.348 "base_bdevs_list": [ 00:07:05.348 { 00:07:05.348 "name": "BaseBdev1", 00:07:05.348 "uuid": "bec475c5-95df-58d8-aa4c-bce220336465", 00:07:05.348 "is_configured": true, 00:07:05.348 "data_offset": 2048, 00:07:05.348 "data_size": 63488 00:07:05.348 }, 00:07:05.348 { 00:07:05.348 "name": "BaseBdev2", 00:07:05.348 "uuid": "07386e30-9995-5037-9619-fdf3d394c7ce", 00:07:05.348 "is_configured": true, 00:07:05.348 "data_offset": 2048, 00:07:05.348 "data_size": 63488 
00:07:05.348 }, 00:07:05.348 { 00:07:05.348 "name": "BaseBdev3", 00:07:05.348 "uuid": "28cbd255-f2d4-5d35-8d90-21b9f99b54f1", 00:07:05.348 "is_configured": true, 00:07:05.348 "data_offset": 2048, 00:07:05.348 "data_size": 63488 00:07:05.348 } 00:07:05.348 ] 00:07:05.348 }' 00:07:05.348 14:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:05.348 14:31:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.609 14:31:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:05.609 14:31:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:05.871 [2024-10-01 14:31:57.315066] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:07:06.815 14:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:06.815 14:31:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.815 14:31:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.815 14:31:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.815 14:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:06.815 14:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:06.815 14:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:07:06.815 14:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:06.815 14:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:06.815 14:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:07:06.815 14:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:06.815 14:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:06.815 14:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:06.815 14:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:06.815 14:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:06.815 14:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:06.815 14:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:06.815 14:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:06.815 14:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:06.815 14:31:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.815 14:31:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.815 14:31:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.815 14:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:06.815 "name": "raid_bdev1", 00:07:06.815 "uuid": "a7466e3f-d7bd-47c8-b326-eeab4ef21df2", 00:07:06.815 "strip_size_kb": 64, 00:07:06.815 "state": "online", 00:07:06.815 "raid_level": "raid0", 00:07:06.815 "superblock": true, 00:07:06.815 "num_base_bdevs": 3, 00:07:06.815 "num_base_bdevs_discovered": 3, 00:07:06.815 "num_base_bdevs_operational": 3, 00:07:06.815 "base_bdevs_list": [ 00:07:06.815 { 00:07:06.815 "name": "BaseBdev1", 00:07:06.815 "uuid": "bec475c5-95df-58d8-aa4c-bce220336465", 00:07:06.815 "is_configured": true, 00:07:06.815 "data_offset": 2048, 00:07:06.815 "data_size": 63488 
00:07:06.815 }, 00:07:06.815 { 00:07:06.815 "name": "BaseBdev2", 00:07:06.815 "uuid": "07386e30-9995-5037-9619-fdf3d394c7ce", 00:07:06.815 "is_configured": true, 00:07:06.815 "data_offset": 2048, 00:07:06.815 "data_size": 63488 00:07:06.815 }, 00:07:06.815 { 00:07:06.816 "name": "BaseBdev3", 00:07:06.816 "uuid": "28cbd255-f2d4-5d35-8d90-21b9f99b54f1", 00:07:06.816 "is_configured": true, 00:07:06.816 "data_offset": 2048, 00:07:06.816 "data_size": 63488 00:07:06.816 } 00:07:06.816 ] 00:07:06.816 }' 00:07:06.816 14:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:06.816 14:31:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.076 14:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:07.076 14:31:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.076 14:31:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.076 [2024-10-01 14:31:58.566833] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:07.076 [2024-10-01 14:31:58.566862] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:07.076 [2024-10-01 14:31:58.569875] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:07.076 [2024-10-01 14:31:58.569921] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:07.076 [2024-10-01 14:31:58.569958] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:07.076 [2024-10-01 14:31:58.569967] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:07:07.076 { 00:07:07.076 "results": [ 00:07:07.076 { 00:07:07.076 "job": "raid_bdev1", 00:07:07.076 "core_mask": "0x1", 00:07:07.076 "workload": "randrw", 00:07:07.076 "percentage": 50, 
00:07:07.076 "status": "finished", 00:07:07.076 "queue_depth": 1, 00:07:07.076 "io_size": 131072, 00:07:07.076 "runtime": 1.249913, 00:07:07.076 "iops": 13905.767841441764, 00:07:07.076 "mibps": 1738.2209801802205, 00:07:07.076 "io_failed": 1, 00:07:07.076 "io_timeout": 0, 00:07:07.076 "avg_latency_us": 98.58782436295726, 00:07:07.076 "min_latency_us": 33.28, 00:07:07.076 "max_latency_us": 1714.0184615384615 00:07:07.076 } 00:07:07.076 ], 00:07:07.076 "core_count": 1 00:07:07.076 } 00:07:07.076 14:31:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.076 14:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 64003 00:07:07.076 14:31:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 64003 ']' 00:07:07.076 14:31:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 64003 00:07:07.076 14:31:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:07.076 14:31:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:07.076 14:31:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64003 00:07:07.076 14:31:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:07.076 killing process with pid 64003 00:07:07.076 14:31:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:07.076 14:31:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64003' 00:07:07.076 14:31:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 64003 00:07:07.076 [2024-10-01 14:31:58.599904] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:07.076 14:31:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 64003 00:07:07.076 [2024-10-01 14:31:58.741341] 
bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:08.019 14:31:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:08.019 14:31:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Ln1JrzmltV 00:07:08.019 14:31:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:08.019 14:31:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.80 00:07:08.019 14:31:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:08.019 14:31:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:08.019 14:31:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:08.019 ************************************ 00:07:08.019 END TEST raid_read_error_test 00:07:08.019 ************************************ 00:07:08.019 14:31:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.80 != \0\.\0\0 ]] 00:07:08.019 00:07:08.019 real 0m3.845s 00:07:08.019 user 0m4.582s 00:07:08.019 sys 0m0.422s 00:07:08.019 14:31:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.019 14:31:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.019 14:31:59 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:07:08.019 14:31:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:08.019 14:31:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.019 14:31:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:08.019 ************************************ 00:07:08.019 START TEST raid_write_error_test 00:07:08.019 ************************************ 00:07:08.019 14:31:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:07:08.019 14:31:59 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:08.019 14:31:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:07:08.019 14:31:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:08.019 14:31:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:08.019 14:31:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:08.019 14:31:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:08.019 14:31:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:08.019 14:31:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:08.019 14:31:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:08.019 14:31:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:08.019 14:31:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:08.019 14:31:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:07:08.019 14:31:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:08.019 14:31:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:08.019 14:31:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:08.020 14:31:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:08.020 14:31:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:08.020 14:31:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:08.020 14:31:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:08.020 14:31:59 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:08.020 14:31:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:08.020 14:31:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:08.020 14:31:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:08.020 14:31:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:08.020 14:31:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:08.020 14:31:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Iw3k9vqRJF 00:07:08.020 14:31:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=64143 00:07:08.020 14:31:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 64143 00:07:08.020 14:31:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 64143 ']' 00:07:08.020 14:31:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.020 14:31:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:08.020 14:31:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:08.020 14:31:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.020 14:31:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:08.020 14:31:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.281 [2024-10-01 14:31:59.722271] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:07:08.281 [2024-10-01 14:31:59.722396] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64143 ] 00:07:08.281 [2024-10-01 14:31:59.871758] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.542 [2024-10-01 14:32:00.065352] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.542 [2024-10-01 14:32:00.201400] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:08.542 [2024-10-01 14:32:00.201449] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:09.114 14:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.114 14:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:09.114 14:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:09.114 14:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:09.114 14:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.114 14:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.114 BaseBdev1_malloc 00:07:09.114 14:32:00 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.114 14:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:09.114 14:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.114 14:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.114 true 00:07:09.114 14:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.114 14:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:09.114 14:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.114 14:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.114 [2024-10-01 14:32:00.623280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:09.114 [2024-10-01 14:32:00.623346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:09.114 [2024-10-01 14:32:00.623367] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:09.114 [2024-10-01 14:32:00.623378] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:09.114 [2024-10-01 14:32:00.625621] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:09.114 [2024-10-01 14:32:00.625666] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:09.114 BaseBdev1 00:07:09.114 14:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.114 14:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:09.114 14:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:07:09.114 14:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.114 14:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.114 BaseBdev2_malloc 00:07:09.114 14:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.114 14:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:09.114 14:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.114 14:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.114 true 00:07:09.114 14:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.114 14:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:09.114 14:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.114 14:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.114 [2024-10-01 14:32:00.681958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:09.114 [2024-10-01 14:32:00.682015] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:09.114 [2024-10-01 14:32:00.682032] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:09.114 [2024-10-01 14:32:00.682042] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:09.114 [2024-10-01 14:32:00.684162] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:09.114 [2024-10-01 14:32:00.684202] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:09.114 BaseBdev2 00:07:09.114 14:32:00 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.114 14:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:09.114 14:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:07:09.115 14:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.115 14:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.115 BaseBdev3_malloc 00:07:09.115 14:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.115 14:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:07:09.115 14:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.115 14:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.115 true 00:07:09.115 14:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.115 14:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:07:09.115 14:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.115 14:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.115 [2024-10-01 14:32:00.730096] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:07:09.115 [2024-10-01 14:32:00.730145] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:09.115 [2024-10-01 14:32:00.730163] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:07:09.115 [2024-10-01 14:32:00.730174] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:09.115 [2024-10-01 14:32:00.732299] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:09.115 [2024-10-01 14:32:00.732335] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:07:09.115 BaseBdev3 00:07:09.115 14:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.115 14:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:07:09.115 14:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.115 14:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.115 [2024-10-01 14:32:00.738169] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:09.115 [2024-10-01 14:32:00.740015] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:09.115 [2024-10-01 14:32:00.740099] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:09.115 [2024-10-01 14:32:00.740303] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:07:09.115 [2024-10-01 14:32:00.740313] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:09.115 [2024-10-01 14:32:00.740580] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:09.115 [2024-10-01 14:32:00.740747] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:07:09.115 [2024-10-01 14:32:00.740759] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:07:09.115 [2024-10-01 14:32:00.740912] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:09.115 14:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.115 
14:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:09.115 14:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:09.115 14:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:09.115 14:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:09.115 14:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:09.115 14:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:09.115 14:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:09.115 14:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:09.115 14:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:09.115 14:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:09.115 14:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:09.115 14:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:09.115 14:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.115 14:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.115 14:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.115 14:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:09.115 "name": "raid_bdev1", 00:07:09.115 "uuid": "a01a60ce-c4ba-4ef0-b505-261cca80ed4a", 00:07:09.115 "strip_size_kb": 64, 00:07:09.115 "state": "online", 00:07:09.115 "raid_level": "raid0", 00:07:09.115 "superblock": true, 
00:07:09.115 "num_base_bdevs": 3, 00:07:09.115 "num_base_bdevs_discovered": 3, 00:07:09.115 "num_base_bdevs_operational": 3, 00:07:09.115 "base_bdevs_list": [ 00:07:09.115 { 00:07:09.115 "name": "BaseBdev1", 00:07:09.115 "uuid": "d6a7b96b-3ba5-5908-b50f-8bd6f2ae2f87", 00:07:09.115 "is_configured": true, 00:07:09.115 "data_offset": 2048, 00:07:09.115 "data_size": 63488 00:07:09.115 }, 00:07:09.115 { 00:07:09.115 "name": "BaseBdev2", 00:07:09.115 "uuid": "88d31c45-fc63-54bc-bdf2-c88ad782cd7b", 00:07:09.115 "is_configured": true, 00:07:09.115 "data_offset": 2048, 00:07:09.115 "data_size": 63488 00:07:09.115 }, 00:07:09.115 { 00:07:09.115 "name": "BaseBdev3", 00:07:09.115 "uuid": "d94910d4-14ca-552f-b68e-66add3585651", 00:07:09.115 "is_configured": true, 00:07:09.115 "data_offset": 2048, 00:07:09.115 "data_size": 63488 00:07:09.115 } 00:07:09.115 ] 00:07:09.115 }' 00:07:09.115 14:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:09.115 14:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.688 14:32:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:09.688 14:32:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:09.688 [2024-10-01 14:32:01.143190] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:07:10.628 14:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:10.628 14:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.629 14:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.629 14:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.629 14:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local 
expected_num_base_bdevs 00:07:10.629 14:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:10.629 14:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:07:10.629 14:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:10.629 14:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:10.629 14:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:10.629 14:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:10.629 14:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:10.629 14:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:10.629 14:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:10.629 14:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:10.629 14:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:10.629 14:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:10.629 14:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:10.629 14:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.629 14:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.629 14:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.629 14:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.629 14:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:07:10.629 "name": "raid_bdev1", 00:07:10.629 "uuid": "a01a60ce-c4ba-4ef0-b505-261cca80ed4a", 00:07:10.629 "strip_size_kb": 64, 00:07:10.629 "state": "online", 00:07:10.629 "raid_level": "raid0", 00:07:10.629 "superblock": true, 00:07:10.629 "num_base_bdevs": 3, 00:07:10.629 "num_base_bdevs_discovered": 3, 00:07:10.629 "num_base_bdevs_operational": 3, 00:07:10.629 "base_bdevs_list": [ 00:07:10.629 { 00:07:10.629 "name": "BaseBdev1", 00:07:10.629 "uuid": "d6a7b96b-3ba5-5908-b50f-8bd6f2ae2f87", 00:07:10.629 "is_configured": true, 00:07:10.629 "data_offset": 2048, 00:07:10.629 "data_size": 63488 00:07:10.629 }, 00:07:10.629 { 00:07:10.629 "name": "BaseBdev2", 00:07:10.629 "uuid": "88d31c45-fc63-54bc-bdf2-c88ad782cd7b", 00:07:10.629 "is_configured": true, 00:07:10.629 "data_offset": 2048, 00:07:10.629 "data_size": 63488 00:07:10.629 }, 00:07:10.629 { 00:07:10.629 "name": "BaseBdev3", 00:07:10.629 "uuid": "d94910d4-14ca-552f-b68e-66add3585651", 00:07:10.629 "is_configured": true, 00:07:10.629 "data_offset": 2048, 00:07:10.629 "data_size": 63488 00:07:10.629 } 00:07:10.629 ] 00:07:10.629 }' 00:07:10.629 14:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:10.629 14:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.889 14:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:10.889 14:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.889 14:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.889 [2024-10-01 14:32:02.405042] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:10.889 [2024-10-01 14:32:02.405074] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:10.889 [2024-10-01 14:32:02.408162] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:07:10.889 [2024-10-01 14:32:02.408209] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:10.890 [2024-10-01 14:32:02.408244] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:10.890 [2024-10-01 14:32:02.408253] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:07:10.890 { 00:07:10.890 "results": [ 00:07:10.890 { 00:07:10.890 "job": "raid_bdev1", 00:07:10.890 "core_mask": "0x1", 00:07:10.890 "workload": "randrw", 00:07:10.890 "percentage": 50, 00:07:10.890 "status": "finished", 00:07:10.890 "queue_depth": 1, 00:07:10.890 "io_size": 131072, 00:07:10.890 "runtime": 1.259982, 00:07:10.890 "iops": 14740.686771715786, 00:07:10.890 "mibps": 1842.5858464644732, 00:07:10.890 "io_failed": 1, 00:07:10.890 "io_timeout": 0, 00:07:10.890 "avg_latency_us": 92.98841225534451, 00:07:10.890 "min_latency_us": 21.366153846153846, 00:07:10.890 "max_latency_us": 1701.4153846153847 00:07:10.890 } 00:07:10.890 ], 00:07:10.890 "core_count": 1 00:07:10.890 } 00:07:10.890 14:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.890 14:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 64143 00:07:10.890 14:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 64143 ']' 00:07:10.890 14:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 64143 00:07:10.890 14:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:07:10.890 14:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:10.890 14:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64143 00:07:10.890 killing process with pid 64143 00:07:10.890 14:32:02 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:10.890 14:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:10.890 14:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64143' 00:07:10.890 14:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 64143 00:07:10.890 [2024-10-01 14:32:02.438082] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:10.890 14:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 64143 00:07:11.150 [2024-10-01 14:32:02.584484] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:12.094 14:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:12.094 14:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Iw3k9vqRJF 00:07:12.094 14:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:12.094 14:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.79 00:07:12.094 14:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:12.094 14:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:12.094 14:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:12.094 14:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.79 != \0\.\0\0 ]] 00:07:12.094 00:07:12.094 real 0m3.807s 00:07:12.094 user 0m4.486s 00:07:12.094 sys 0m0.414s 00:07:12.094 14:32:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.094 ************************************ 00:07:12.094 END TEST raid_write_error_test 00:07:12.094 ************************************ 00:07:12.094 14:32:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.094 
14:32:03 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:12.094 14:32:03 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:07:12.094 14:32:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:12.094 14:32:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.094 14:32:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:12.094 ************************************ 00:07:12.094 START TEST raid_state_function_test 00:07:12.094 ************************************ 00:07:12.094 14:32:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false 00:07:12.094 14:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:12.094 14:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:07:12.094 14:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:12.094 14:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:12.094 14:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:12.094 14:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:12.094 14:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:12.094 14:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:12.094 14:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:12.094 14:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:12.094 14:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:12.094 14:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i 
<= num_base_bdevs )) 00:07:12.094 14:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:07:12.094 14:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:12.094 14:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:12.094 14:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:12.094 14:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:12.094 14:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:12.094 14:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:12.094 14:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:12.094 14:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:12.094 14:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:12.094 14:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:12.094 14:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:12.094 14:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:12.094 14:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:12.094 Process raid pid: 64270 00:07:12.094 14:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=64270 00:07:12.094 14:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64270' 00:07:12.094 14:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 64270 00:07:12.094 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:07:12.094 14:32:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 64270 ']' 00:07:12.094 14:32:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.094 14:32:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:12.094 14:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:12.094 14:32:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.094 14:32:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:12.094 14:32:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.094 [2024-10-01 14:32:03.591025] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:07:12.094 [2024-10-01 14:32:03.591154] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:12.094 [2024-10-01 14:32:03.742059] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.354 [2024-10-01 14:32:03.931138] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.614 [2024-10-01 14:32:04.068798] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:12.614 [2024-10-01 14:32:04.068842] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:12.991 14:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:12.991 14:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:12.991 14:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:12.991 14:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.991 14:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.991 [2024-10-01 14:32:04.550965] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:12.991 [2024-10-01 14:32:04.551023] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:12.991 [2024-10-01 14:32:04.551033] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:12.991 [2024-10-01 14:32:04.551042] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:12.991 [2024-10-01 14:32:04.551049] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:07:12.991 [2024-10-01 14:32:04.551057] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:12.991 14:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.991 14:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:12.991 14:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:12.991 14:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:12.991 14:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:12.991 14:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:12.991 14:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:12.991 14:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:12.991 14:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:12.991 14:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:12.991 14:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:12.991 14:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:12.991 14:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.991 14:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.991 14:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.991 14:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.991 14:32:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:12.991 "name": "Existed_Raid", 00:07:12.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:12.991 "strip_size_kb": 64, 00:07:12.991 "state": "configuring", 00:07:12.991 "raid_level": "concat", 00:07:12.991 "superblock": false, 00:07:12.991 "num_base_bdevs": 3, 00:07:12.991 "num_base_bdevs_discovered": 0, 00:07:12.991 "num_base_bdevs_operational": 3, 00:07:12.991 "base_bdevs_list": [ 00:07:12.991 { 00:07:12.991 "name": "BaseBdev1", 00:07:12.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:12.991 "is_configured": false, 00:07:12.991 "data_offset": 0, 00:07:12.991 "data_size": 0 00:07:12.991 }, 00:07:12.991 { 00:07:12.991 "name": "BaseBdev2", 00:07:12.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:12.991 "is_configured": false, 00:07:12.991 "data_offset": 0, 00:07:12.991 "data_size": 0 00:07:12.991 }, 00:07:12.991 { 00:07:12.991 "name": "BaseBdev3", 00:07:12.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:12.991 "is_configured": false, 00:07:12.991 "data_offset": 0, 00:07:12.991 "data_size": 0 00:07:12.991 } 00:07:12.991 ] 00:07:12.991 }' 00:07:12.991 14:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:12.991 14:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.251 14:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:13.251 14:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.251 14:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.251 [2024-10-01 14:32:04.906955] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:13.252 [2024-10-01 14:32:04.906992] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 
00:07:13.252 14:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.252 14:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:13.252 14:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.252 14:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.252 [2024-10-01 14:32:04.919006] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:13.252 [2024-10-01 14:32:04.919055] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:13.252 [2024-10-01 14:32:04.919064] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:13.252 [2024-10-01 14:32:04.919073] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:13.252 [2024-10-01 14:32:04.919080] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:13.252 [2024-10-01 14:32:04.919089] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:13.252 14:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.252 14:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:13.252 14:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.252 14:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.513 [2024-10-01 14:32:04.970792] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:13.513 BaseBdev1 00:07:13.513 14:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:07:13.513 14:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:13.513 14:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:13.513 14:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:13.513 14:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:13.513 14:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:13.513 14:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:13.513 14:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:13.513 14:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.513 14:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.513 14:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.513 14:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:13.513 14:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.513 14:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.513 [ 00:07:13.513 { 00:07:13.513 "name": "BaseBdev1", 00:07:13.513 "aliases": [ 00:07:13.513 "5e5ba9ad-bab6-49f6-8857-9bea6a63fba0" 00:07:13.513 ], 00:07:13.513 "product_name": "Malloc disk", 00:07:13.513 "block_size": 512, 00:07:13.513 "num_blocks": 65536, 00:07:13.513 "uuid": "5e5ba9ad-bab6-49f6-8857-9bea6a63fba0", 00:07:13.513 "assigned_rate_limits": { 00:07:13.513 "rw_ios_per_sec": 0, 00:07:13.513 "rw_mbytes_per_sec": 0, 00:07:13.513 "r_mbytes_per_sec": 0, 00:07:13.513 "w_mbytes_per_sec": 0 00:07:13.513 }, 
00:07:13.513 "claimed": true, 00:07:13.513 "claim_type": "exclusive_write", 00:07:13.513 "zoned": false, 00:07:13.513 "supported_io_types": { 00:07:13.513 "read": true, 00:07:13.513 "write": true, 00:07:13.513 "unmap": true, 00:07:13.513 "flush": true, 00:07:13.513 "reset": true, 00:07:13.513 "nvme_admin": false, 00:07:13.513 "nvme_io": false, 00:07:13.513 "nvme_io_md": false, 00:07:13.513 "write_zeroes": true, 00:07:13.513 "zcopy": true, 00:07:13.513 "get_zone_info": false, 00:07:13.513 "zone_management": false, 00:07:13.513 "zone_append": false, 00:07:13.513 "compare": false, 00:07:13.513 "compare_and_write": false, 00:07:13.513 "abort": true, 00:07:13.513 "seek_hole": false, 00:07:13.513 "seek_data": false, 00:07:13.513 "copy": true, 00:07:13.513 "nvme_iov_md": false 00:07:13.513 }, 00:07:13.513 "memory_domains": [ 00:07:13.513 { 00:07:13.513 "dma_device_id": "system", 00:07:13.513 "dma_device_type": 1 00:07:13.513 }, 00:07:13.513 { 00:07:13.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:13.513 "dma_device_type": 2 00:07:13.513 } 00:07:13.513 ], 00:07:13.513 "driver_specific": {} 00:07:13.513 } 00:07:13.513 ] 00:07:13.513 14:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.513 14:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:13.513 14:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:13.513 14:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:13.514 14:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:13.514 14:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:13.514 14:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:13.514 14:32:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:13.514 14:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:13.514 14:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:13.514 14:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:13.514 14:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:13.514 14:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:13.514 14:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:13.514 14:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.514 14:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.514 14:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.514 14:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:13.514 "name": "Existed_Raid", 00:07:13.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:13.514 "strip_size_kb": 64, 00:07:13.514 "state": "configuring", 00:07:13.514 "raid_level": "concat", 00:07:13.514 "superblock": false, 00:07:13.514 "num_base_bdevs": 3, 00:07:13.514 "num_base_bdevs_discovered": 1, 00:07:13.514 "num_base_bdevs_operational": 3, 00:07:13.514 "base_bdevs_list": [ 00:07:13.514 { 00:07:13.514 "name": "BaseBdev1", 00:07:13.514 "uuid": "5e5ba9ad-bab6-49f6-8857-9bea6a63fba0", 00:07:13.514 "is_configured": true, 00:07:13.514 "data_offset": 0, 00:07:13.514 "data_size": 65536 00:07:13.514 }, 00:07:13.514 { 00:07:13.514 "name": "BaseBdev2", 00:07:13.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:13.514 "is_configured": false, 00:07:13.514 
"data_offset": 0, 00:07:13.514 "data_size": 0 00:07:13.514 }, 00:07:13.514 { 00:07:13.514 "name": "BaseBdev3", 00:07:13.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:13.514 "is_configured": false, 00:07:13.514 "data_offset": 0, 00:07:13.514 "data_size": 0 00:07:13.514 } 00:07:13.514 ] 00:07:13.514 }' 00:07:13.514 14:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:13.514 14:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.774 14:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:13.774 14:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.774 14:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.774 [2024-10-01 14:32:05.298894] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:13.774 [2024-10-01 14:32:05.298948] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:13.774 14:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.774 14:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:13.774 14:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.774 14:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.774 [2024-10-01 14:32:05.306938] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:13.774 [2024-10-01 14:32:05.308792] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:13.774 [2024-10-01 14:32:05.308830] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:07:13.774 [2024-10-01 14:32:05.308839] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:13.774 [2024-10-01 14:32:05.308849] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:13.774 14:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.774 14:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:13.774 14:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:13.774 14:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:13.774 14:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:13.774 14:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:13.774 14:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:13.774 14:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:13.774 14:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:13.774 14:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:13.774 14:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:13.774 14:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:13.774 14:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:13.774 14:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:13.774 14:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:13.774 14:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.774 14:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.774 14:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.774 14:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:13.774 "name": "Existed_Raid", 00:07:13.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:13.774 "strip_size_kb": 64, 00:07:13.774 "state": "configuring", 00:07:13.774 "raid_level": "concat", 00:07:13.774 "superblock": false, 00:07:13.774 "num_base_bdevs": 3, 00:07:13.774 "num_base_bdevs_discovered": 1, 00:07:13.774 "num_base_bdevs_operational": 3, 00:07:13.774 "base_bdevs_list": [ 00:07:13.774 { 00:07:13.774 "name": "BaseBdev1", 00:07:13.774 "uuid": "5e5ba9ad-bab6-49f6-8857-9bea6a63fba0", 00:07:13.774 "is_configured": true, 00:07:13.774 "data_offset": 0, 00:07:13.774 "data_size": 65536 00:07:13.774 }, 00:07:13.774 { 00:07:13.774 "name": "BaseBdev2", 00:07:13.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:13.774 "is_configured": false, 00:07:13.774 "data_offset": 0, 00:07:13.774 "data_size": 0 00:07:13.774 }, 00:07:13.774 { 00:07:13.774 "name": "BaseBdev3", 00:07:13.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:13.774 "is_configured": false, 00:07:13.774 "data_offset": 0, 00:07:13.774 "data_size": 0 00:07:13.774 } 00:07:13.774 ] 00:07:13.774 }' 00:07:13.774 14:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:13.774 14:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.035 14:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:14.035 14:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:07:14.035 14:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.035 [2024-10-01 14:32:05.681669] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:14.035 BaseBdev2 00:07:14.035 14:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.035 14:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:14.035 14:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:14.035 14:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:14.035 14:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:14.035 14:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:14.035 14:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:14.035 14:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:14.035 14:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.035 14:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.035 14:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.035 14:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:14.035 14:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.035 14:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.035 [ 00:07:14.035 { 00:07:14.035 "name": "BaseBdev2", 00:07:14.035 "aliases": [ 00:07:14.035 "ffa9dd70-9a00-495d-ae34-48aff301db7f" 00:07:14.035 ], 00:07:14.035 
"product_name": "Malloc disk", 00:07:14.035 "block_size": 512, 00:07:14.035 "num_blocks": 65536, 00:07:14.035 "uuid": "ffa9dd70-9a00-495d-ae34-48aff301db7f", 00:07:14.035 "assigned_rate_limits": { 00:07:14.035 "rw_ios_per_sec": 0, 00:07:14.035 "rw_mbytes_per_sec": 0, 00:07:14.035 "r_mbytes_per_sec": 0, 00:07:14.035 "w_mbytes_per_sec": 0 00:07:14.035 }, 00:07:14.035 "claimed": true, 00:07:14.035 "claim_type": "exclusive_write", 00:07:14.035 "zoned": false, 00:07:14.035 "supported_io_types": { 00:07:14.035 "read": true, 00:07:14.035 "write": true, 00:07:14.035 "unmap": true, 00:07:14.035 "flush": true, 00:07:14.035 "reset": true, 00:07:14.035 "nvme_admin": false, 00:07:14.035 "nvme_io": false, 00:07:14.035 "nvme_io_md": false, 00:07:14.035 "write_zeroes": true, 00:07:14.035 "zcopy": true, 00:07:14.035 "get_zone_info": false, 00:07:14.035 "zone_management": false, 00:07:14.035 "zone_append": false, 00:07:14.035 "compare": false, 00:07:14.035 "compare_and_write": false, 00:07:14.035 "abort": true, 00:07:14.035 "seek_hole": false, 00:07:14.035 "seek_data": false, 00:07:14.035 "copy": true, 00:07:14.035 "nvme_iov_md": false 00:07:14.035 }, 00:07:14.035 "memory_domains": [ 00:07:14.035 { 00:07:14.035 "dma_device_id": "system", 00:07:14.035 "dma_device_type": 1 00:07:14.035 }, 00:07:14.035 { 00:07:14.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:14.035 "dma_device_type": 2 00:07:14.035 } 00:07:14.035 ], 00:07:14.035 "driver_specific": {} 00:07:14.035 } 00:07:14.035 ] 00:07:14.035 14:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.035 14:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:14.035 14:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:14.035 14:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:14.035 14:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:14.035 14:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:14.035 14:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:14.035 14:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:14.035 14:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:14.035 14:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:14.035 14:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:14.035 14:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:14.035 14:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:14.035 14:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:14.035 14:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.035 14:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.035 14:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.035 14:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:14.298 14:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.298 14:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:14.298 "name": "Existed_Raid", 00:07:14.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.298 "strip_size_kb": 64, 00:07:14.298 "state": "configuring", 00:07:14.298 "raid_level": "concat", 00:07:14.298 "superblock": false, 
00:07:14.298 "num_base_bdevs": 3, 00:07:14.298 "num_base_bdevs_discovered": 2, 00:07:14.298 "num_base_bdevs_operational": 3, 00:07:14.298 "base_bdevs_list": [ 00:07:14.298 { 00:07:14.298 "name": "BaseBdev1", 00:07:14.298 "uuid": "5e5ba9ad-bab6-49f6-8857-9bea6a63fba0", 00:07:14.298 "is_configured": true, 00:07:14.298 "data_offset": 0, 00:07:14.298 "data_size": 65536 00:07:14.298 }, 00:07:14.298 { 00:07:14.298 "name": "BaseBdev2", 00:07:14.298 "uuid": "ffa9dd70-9a00-495d-ae34-48aff301db7f", 00:07:14.298 "is_configured": true, 00:07:14.298 "data_offset": 0, 00:07:14.298 "data_size": 65536 00:07:14.298 }, 00:07:14.298 { 00:07:14.298 "name": "BaseBdev3", 00:07:14.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.298 "is_configured": false, 00:07:14.298 "data_offset": 0, 00:07:14.298 "data_size": 0 00:07:14.298 } 00:07:14.298 ] 00:07:14.298 }' 00:07:14.298 14:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:14.298 14:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.559 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:14.559 14:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.559 14:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.559 [2024-10-01 14:32:06.072671] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:14.559 [2024-10-01 14:32:06.072740] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:14.559 [2024-10-01 14:32:06.072753] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:14.559 [2024-10-01 14:32:06.073000] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:14.559 [2024-10-01 14:32:06.073136] bdev_raid.c:1760:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x617000007e80 00:07:14.559 [2024-10-01 14:32:06.073146] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:14.559 [2024-10-01 14:32:06.073372] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:14.559 BaseBdev3 00:07:14.559 14:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.559 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:07:14.559 14:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:07:14.559 14:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:14.559 14:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:14.559 14:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:14.560 14:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:14.560 14:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:14.560 14:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.560 14:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.560 14:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.560 14:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:14.560 14:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.560 14:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.560 [ 00:07:14.560 { 00:07:14.560 "name": "BaseBdev3", 00:07:14.560 "aliases": [ 
00:07:14.560 "c5f30fe3-49d8-4154-b4cc-2abe1802205c" 00:07:14.560 ], 00:07:14.560 "product_name": "Malloc disk", 00:07:14.560 "block_size": 512, 00:07:14.560 "num_blocks": 65536, 00:07:14.560 "uuid": "c5f30fe3-49d8-4154-b4cc-2abe1802205c", 00:07:14.560 "assigned_rate_limits": { 00:07:14.560 "rw_ios_per_sec": 0, 00:07:14.560 "rw_mbytes_per_sec": 0, 00:07:14.560 "r_mbytes_per_sec": 0, 00:07:14.560 "w_mbytes_per_sec": 0 00:07:14.560 }, 00:07:14.560 "claimed": true, 00:07:14.560 "claim_type": "exclusive_write", 00:07:14.560 "zoned": false, 00:07:14.560 "supported_io_types": { 00:07:14.560 "read": true, 00:07:14.560 "write": true, 00:07:14.560 "unmap": true, 00:07:14.560 "flush": true, 00:07:14.560 "reset": true, 00:07:14.560 "nvme_admin": false, 00:07:14.560 "nvme_io": false, 00:07:14.560 "nvme_io_md": false, 00:07:14.560 "write_zeroes": true, 00:07:14.560 "zcopy": true, 00:07:14.560 "get_zone_info": false, 00:07:14.560 "zone_management": false, 00:07:14.560 "zone_append": false, 00:07:14.560 "compare": false, 00:07:14.560 "compare_and_write": false, 00:07:14.560 "abort": true, 00:07:14.560 "seek_hole": false, 00:07:14.560 "seek_data": false, 00:07:14.560 "copy": true, 00:07:14.560 "nvme_iov_md": false 00:07:14.560 }, 00:07:14.560 "memory_domains": [ 00:07:14.560 { 00:07:14.560 "dma_device_id": "system", 00:07:14.560 "dma_device_type": 1 00:07:14.560 }, 00:07:14.560 { 00:07:14.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:14.560 "dma_device_type": 2 00:07:14.560 } 00:07:14.560 ], 00:07:14.560 "driver_specific": {} 00:07:14.560 } 00:07:14.560 ] 00:07:14.560 14:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.560 14:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:14.560 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:14.560 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:07:14.560 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:07:14.560 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:14.560 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:14.560 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:14.560 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:14.560 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:14.560 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:14.560 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:14.560 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:14.560 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:14.560 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.560 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:14.560 14:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.560 14:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.560 14:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.560 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:14.560 "name": "Existed_Raid", 00:07:14.560 "uuid": "4cd7f573-713f-4d1d-9236-f6f0f7d9e90d", 00:07:14.560 "strip_size_kb": 64, 00:07:14.560 "state": "online", 
00:07:14.560 "raid_level": "concat", 00:07:14.560 "superblock": false, 00:07:14.560 "num_base_bdevs": 3, 00:07:14.560 "num_base_bdevs_discovered": 3, 00:07:14.560 "num_base_bdevs_operational": 3, 00:07:14.560 "base_bdevs_list": [ 00:07:14.560 { 00:07:14.560 "name": "BaseBdev1", 00:07:14.560 "uuid": "5e5ba9ad-bab6-49f6-8857-9bea6a63fba0", 00:07:14.560 "is_configured": true, 00:07:14.560 "data_offset": 0, 00:07:14.560 "data_size": 65536 00:07:14.560 }, 00:07:14.560 { 00:07:14.560 "name": "BaseBdev2", 00:07:14.560 "uuid": "ffa9dd70-9a00-495d-ae34-48aff301db7f", 00:07:14.560 "is_configured": true, 00:07:14.560 "data_offset": 0, 00:07:14.560 "data_size": 65536 00:07:14.560 }, 00:07:14.560 { 00:07:14.560 "name": "BaseBdev3", 00:07:14.560 "uuid": "c5f30fe3-49d8-4154-b4cc-2abe1802205c", 00:07:14.560 "is_configured": true, 00:07:14.560 "data_offset": 0, 00:07:14.560 "data_size": 65536 00:07:14.560 } 00:07:14.560 ] 00:07:14.560 }' 00:07:14.560 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:14.560 14:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.819 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:14.819 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:14.819 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:14.819 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:14.819 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:14.819 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:14.819 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:14.819 14:32:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:14.819 14:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.819 14:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.819 [2024-10-01 14:32:06.421159] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:14.819 14:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.819 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:14.819 "name": "Existed_Raid", 00:07:14.819 "aliases": [ 00:07:14.819 "4cd7f573-713f-4d1d-9236-f6f0f7d9e90d" 00:07:14.819 ], 00:07:14.819 "product_name": "Raid Volume", 00:07:14.819 "block_size": 512, 00:07:14.819 "num_blocks": 196608, 00:07:14.819 "uuid": "4cd7f573-713f-4d1d-9236-f6f0f7d9e90d", 00:07:14.819 "assigned_rate_limits": { 00:07:14.819 "rw_ios_per_sec": 0, 00:07:14.819 "rw_mbytes_per_sec": 0, 00:07:14.819 "r_mbytes_per_sec": 0, 00:07:14.819 "w_mbytes_per_sec": 0 00:07:14.819 }, 00:07:14.819 "claimed": false, 00:07:14.819 "zoned": false, 00:07:14.819 "supported_io_types": { 00:07:14.819 "read": true, 00:07:14.819 "write": true, 00:07:14.819 "unmap": true, 00:07:14.819 "flush": true, 00:07:14.819 "reset": true, 00:07:14.819 "nvme_admin": false, 00:07:14.819 "nvme_io": false, 00:07:14.819 "nvme_io_md": false, 00:07:14.819 "write_zeroes": true, 00:07:14.819 "zcopy": false, 00:07:14.819 "get_zone_info": false, 00:07:14.819 "zone_management": false, 00:07:14.819 "zone_append": false, 00:07:14.819 "compare": false, 00:07:14.819 "compare_and_write": false, 00:07:14.819 "abort": false, 00:07:14.819 "seek_hole": false, 00:07:14.819 "seek_data": false, 00:07:14.819 "copy": false, 00:07:14.819 "nvme_iov_md": false 00:07:14.819 }, 00:07:14.819 "memory_domains": [ 00:07:14.819 { 00:07:14.819 "dma_device_id": "system", 00:07:14.819 "dma_device_type": 1 
00:07:14.819 }, 00:07:14.819 { 00:07:14.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:14.819 "dma_device_type": 2 00:07:14.819 }, 00:07:14.819 { 00:07:14.819 "dma_device_id": "system", 00:07:14.819 "dma_device_type": 1 00:07:14.819 }, 00:07:14.819 { 00:07:14.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:14.819 "dma_device_type": 2 00:07:14.819 }, 00:07:14.819 { 00:07:14.819 "dma_device_id": "system", 00:07:14.819 "dma_device_type": 1 00:07:14.819 }, 00:07:14.819 { 00:07:14.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:14.819 "dma_device_type": 2 00:07:14.819 } 00:07:14.819 ], 00:07:14.819 "driver_specific": { 00:07:14.819 "raid": { 00:07:14.819 "uuid": "4cd7f573-713f-4d1d-9236-f6f0f7d9e90d", 00:07:14.819 "strip_size_kb": 64, 00:07:14.819 "state": "online", 00:07:14.819 "raid_level": "concat", 00:07:14.819 "superblock": false, 00:07:14.819 "num_base_bdevs": 3, 00:07:14.819 "num_base_bdevs_discovered": 3, 00:07:14.819 "num_base_bdevs_operational": 3, 00:07:14.819 "base_bdevs_list": [ 00:07:14.819 { 00:07:14.819 "name": "BaseBdev1", 00:07:14.819 "uuid": "5e5ba9ad-bab6-49f6-8857-9bea6a63fba0", 00:07:14.819 "is_configured": true, 00:07:14.819 "data_offset": 0, 00:07:14.819 "data_size": 65536 00:07:14.819 }, 00:07:14.819 { 00:07:14.819 "name": "BaseBdev2", 00:07:14.819 "uuid": "ffa9dd70-9a00-495d-ae34-48aff301db7f", 00:07:14.819 "is_configured": true, 00:07:14.819 "data_offset": 0, 00:07:14.819 "data_size": 65536 00:07:14.819 }, 00:07:14.819 { 00:07:14.819 "name": "BaseBdev3", 00:07:14.819 "uuid": "c5f30fe3-49d8-4154-b4cc-2abe1802205c", 00:07:14.819 "is_configured": true, 00:07:14.819 "data_offset": 0, 00:07:14.819 "data_size": 65536 00:07:14.819 } 00:07:14.819 ] 00:07:14.819 } 00:07:14.819 } 00:07:14.819 }' 00:07:14.819 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:14.819 14:32:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:14.819 BaseBdev2 00:07:14.819 BaseBdev3' 00:07:14.819 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:15.078 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:15.078 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:15.078 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:15.078 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:15.078 14:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.078 14:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.078 14:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.078 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:15.078 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:15.078 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:15.078 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:15.078 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:15.078 14:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.078 14:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.078 14:32:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.078 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:15.078 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:15.078 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:15.078 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:15.078 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:15.078 14:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.078 14:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.078 14:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.078 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:15.078 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:15.078 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:15.078 14:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.078 14:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.078 [2024-10-01 14:32:06.596892] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:15.078 [2024-10-01 14:32:06.596921] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:15.078 [2024-10-01 14:32:06.596975] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:15.078 14:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:15.079 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:15.079 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:15.079 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:15.079 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:15.079 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:15.079 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:07:15.079 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:15.079 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:15.079 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:15.079 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:15.079 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:15.079 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:15.079 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:15.079 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:15.079 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:15.079 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.079 14:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.079 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:07:15.079 14:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.079 14:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.079 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:15.079 "name": "Existed_Raid", 00:07:15.079 "uuid": "4cd7f573-713f-4d1d-9236-f6f0f7d9e90d", 00:07:15.079 "strip_size_kb": 64, 00:07:15.079 "state": "offline", 00:07:15.079 "raid_level": "concat", 00:07:15.079 "superblock": false, 00:07:15.079 "num_base_bdevs": 3, 00:07:15.079 "num_base_bdevs_discovered": 2, 00:07:15.079 "num_base_bdevs_operational": 2, 00:07:15.079 "base_bdevs_list": [ 00:07:15.079 { 00:07:15.079 "name": null, 00:07:15.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:15.079 "is_configured": false, 00:07:15.079 "data_offset": 0, 00:07:15.079 "data_size": 65536 00:07:15.079 }, 00:07:15.079 { 00:07:15.079 "name": "BaseBdev2", 00:07:15.079 "uuid": "ffa9dd70-9a00-495d-ae34-48aff301db7f", 00:07:15.079 "is_configured": true, 00:07:15.079 "data_offset": 0, 00:07:15.079 "data_size": 65536 00:07:15.079 }, 00:07:15.079 { 00:07:15.079 "name": "BaseBdev3", 00:07:15.079 "uuid": "c5f30fe3-49d8-4154-b4cc-2abe1802205c", 00:07:15.079 "is_configured": true, 00:07:15.079 "data_offset": 0, 00:07:15.079 "data_size": 65536 00:07:15.079 } 00:07:15.079 ] 00:07:15.079 }' 00:07:15.079 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:15.079 14:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.337 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:15.337 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:15.337 14:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:15.337 14:32:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.337 14:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.337 14:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.337 14:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.337 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:15.337 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:15.337 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:15.337 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.337 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.337 [2024-10-01 14:32:07.008604] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:15.596 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.596 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:15.596 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:15.596 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.596 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.596 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.596 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:15.596 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.596 14:32:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:15.596 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:15.596 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:07:15.596 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.596 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.596 [2024-10-01 14:32:07.108548] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:15.596 [2024-10-01 14:32:07.108600] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:15.596 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.596 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:15.596 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:15.596 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.596 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:15.596 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.596 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.596 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.596 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:15.596 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:15.596 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:07:15.596 
14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:07:15.596 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:15.596 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:15.596 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.596 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.596 BaseBdev2 00:07:15.596 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.596 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:07:15.596 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:15.596 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:15.596 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:15.596 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:15.596 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:15.596 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:15.596 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.596 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.596 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.596 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:15.596 14:32:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.596 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.596 [ 00:07:15.596 { 00:07:15.596 "name": "BaseBdev2", 00:07:15.596 "aliases": [ 00:07:15.596 "ba4d3f62-74aa-4b38-b473-01070791e04c" 00:07:15.596 ], 00:07:15.596 "product_name": "Malloc disk", 00:07:15.596 "block_size": 512, 00:07:15.596 "num_blocks": 65536, 00:07:15.596 "uuid": "ba4d3f62-74aa-4b38-b473-01070791e04c", 00:07:15.596 "assigned_rate_limits": { 00:07:15.596 "rw_ios_per_sec": 0, 00:07:15.596 "rw_mbytes_per_sec": 0, 00:07:15.596 "r_mbytes_per_sec": 0, 00:07:15.596 "w_mbytes_per_sec": 0 00:07:15.596 }, 00:07:15.596 "claimed": false, 00:07:15.596 "zoned": false, 00:07:15.596 "supported_io_types": { 00:07:15.596 "read": true, 00:07:15.596 "write": true, 00:07:15.596 "unmap": true, 00:07:15.596 "flush": true, 00:07:15.596 "reset": true, 00:07:15.596 "nvme_admin": false, 00:07:15.596 "nvme_io": false, 00:07:15.596 "nvme_io_md": false, 00:07:15.596 "write_zeroes": true, 00:07:15.596 "zcopy": true, 00:07:15.596 "get_zone_info": false, 00:07:15.596 "zone_management": false, 00:07:15.596 "zone_append": false, 00:07:15.596 "compare": false, 00:07:15.596 "compare_and_write": false, 00:07:15.596 "abort": true, 00:07:15.596 "seek_hole": false, 00:07:15.596 "seek_data": false, 00:07:15.596 "copy": true, 00:07:15.596 "nvme_iov_md": false 00:07:15.596 }, 00:07:15.596 "memory_domains": [ 00:07:15.596 { 00:07:15.596 "dma_device_id": "system", 00:07:15.596 "dma_device_type": 1 00:07:15.596 }, 00:07:15.596 { 00:07:15.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.596 "dma_device_type": 2 00:07:15.596 } 00:07:15.596 ], 00:07:15.596 "driver_specific": {} 00:07:15.597 } 00:07:15.597 ] 00:07:15.597 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.597 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:15.597 
14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:15.597 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:15.597 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:15.597 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.597 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.857 BaseBdev3 00:07:15.857 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.857 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:07:15.857 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:07:15.857 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:15.857 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:15.857 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:15.857 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:15.857 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:15.857 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.857 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.857 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.857 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:15.857 14:32:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.857 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.857 [ 00:07:15.857 { 00:07:15.857 "name": "BaseBdev3", 00:07:15.857 "aliases": [ 00:07:15.857 "a82c5c4f-629f-4079-b417-11c3ca9e4af4" 00:07:15.857 ], 00:07:15.857 "product_name": "Malloc disk", 00:07:15.857 "block_size": 512, 00:07:15.857 "num_blocks": 65536, 00:07:15.857 "uuid": "a82c5c4f-629f-4079-b417-11c3ca9e4af4", 00:07:15.857 "assigned_rate_limits": { 00:07:15.857 "rw_ios_per_sec": 0, 00:07:15.857 "rw_mbytes_per_sec": 0, 00:07:15.857 "r_mbytes_per_sec": 0, 00:07:15.857 "w_mbytes_per_sec": 0 00:07:15.857 }, 00:07:15.857 "claimed": false, 00:07:15.857 "zoned": false, 00:07:15.857 "supported_io_types": { 00:07:15.857 "read": true, 00:07:15.857 "write": true, 00:07:15.857 "unmap": true, 00:07:15.857 "flush": true, 00:07:15.857 "reset": true, 00:07:15.857 "nvme_admin": false, 00:07:15.857 "nvme_io": false, 00:07:15.857 "nvme_io_md": false, 00:07:15.857 "write_zeroes": true, 00:07:15.857 "zcopy": true, 00:07:15.857 "get_zone_info": false, 00:07:15.857 "zone_management": false, 00:07:15.857 "zone_append": false, 00:07:15.857 "compare": false, 00:07:15.857 "compare_and_write": false, 00:07:15.857 "abort": true, 00:07:15.857 "seek_hole": false, 00:07:15.857 "seek_data": false, 00:07:15.857 "copy": true, 00:07:15.857 "nvme_iov_md": false 00:07:15.857 }, 00:07:15.857 "memory_domains": [ 00:07:15.857 { 00:07:15.857 "dma_device_id": "system", 00:07:15.857 "dma_device_type": 1 00:07:15.857 }, 00:07:15.857 { 00:07:15.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.857 "dma_device_type": 2 00:07:15.857 } 00:07:15.857 ], 00:07:15.857 "driver_specific": {} 00:07:15.857 } 00:07:15.857 ] 00:07:15.857 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.857 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:15.857 
14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:15.857 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:15.857 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:15.857 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.857 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.857 [2024-10-01 14:32:07.326201] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:15.857 [2024-10-01 14:32:07.326256] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:15.857 [2024-10-01 14:32:07.326278] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:15.857 [2024-10-01 14:32:07.328164] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:15.858 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.858 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:15.858 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:15.858 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:15.858 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:15.858 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:15.858 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:15.858 14:32:07 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:15.858 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:15.858 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:15.858 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:15.858 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.858 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.858 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.858 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:15.858 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.858 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:15.858 "name": "Existed_Raid", 00:07:15.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:15.858 "strip_size_kb": 64, 00:07:15.858 "state": "configuring", 00:07:15.858 "raid_level": "concat", 00:07:15.858 "superblock": false, 00:07:15.858 "num_base_bdevs": 3, 00:07:15.858 "num_base_bdevs_discovered": 2, 00:07:15.858 "num_base_bdevs_operational": 3, 00:07:15.858 "base_bdevs_list": [ 00:07:15.858 { 00:07:15.858 "name": "BaseBdev1", 00:07:15.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:15.858 "is_configured": false, 00:07:15.858 "data_offset": 0, 00:07:15.858 "data_size": 0 00:07:15.858 }, 00:07:15.858 { 00:07:15.858 "name": "BaseBdev2", 00:07:15.858 "uuid": "ba4d3f62-74aa-4b38-b473-01070791e04c", 00:07:15.858 "is_configured": true, 00:07:15.858 "data_offset": 0, 00:07:15.858 "data_size": 65536 00:07:15.858 }, 00:07:15.858 { 00:07:15.858 "name": "BaseBdev3", 00:07:15.858 "uuid": 
"a82c5c4f-629f-4079-b417-11c3ca9e4af4", 00:07:15.858 "is_configured": true, 00:07:15.858 "data_offset": 0, 00:07:15.858 "data_size": 65536 00:07:15.858 } 00:07:15.858 ] 00:07:15.858 }' 00:07:15.858 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:15.858 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.116 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:07:16.116 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.116 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.116 [2024-10-01 14:32:07.670233] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:16.116 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.116 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:16.116 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:16.116 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:16.116 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:16.116 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:16.116 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:16.116 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:16.116 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:16.116 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:16.116 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:16.116 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.116 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.116 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.116 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:16.116 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.116 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:16.116 "name": "Existed_Raid", 00:07:16.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:16.116 "strip_size_kb": 64, 00:07:16.116 "state": "configuring", 00:07:16.116 "raid_level": "concat", 00:07:16.116 "superblock": false, 00:07:16.116 "num_base_bdevs": 3, 00:07:16.116 "num_base_bdevs_discovered": 1, 00:07:16.116 "num_base_bdevs_operational": 3, 00:07:16.116 "base_bdevs_list": [ 00:07:16.116 { 00:07:16.116 "name": "BaseBdev1", 00:07:16.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:16.116 "is_configured": false, 00:07:16.116 "data_offset": 0, 00:07:16.116 "data_size": 0 00:07:16.116 }, 00:07:16.116 { 00:07:16.116 "name": null, 00:07:16.116 "uuid": "ba4d3f62-74aa-4b38-b473-01070791e04c", 00:07:16.116 "is_configured": false, 00:07:16.116 "data_offset": 0, 00:07:16.116 "data_size": 65536 00:07:16.116 }, 00:07:16.116 { 00:07:16.116 "name": "BaseBdev3", 00:07:16.116 "uuid": "a82c5c4f-629f-4079-b417-11c3ca9e4af4", 00:07:16.116 "is_configured": true, 00:07:16.116 "data_offset": 0, 00:07:16.116 "data_size": 65536 00:07:16.116 } 00:07:16.116 ] 00:07:16.116 }' 00:07:16.116 14:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:16.116 14:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.376 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.376 14:32:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.376 14:32:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.376 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:16.376 14:32:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.376 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:07:16.376 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:16.376 14:32:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.376 14:32:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.651 [2024-10-01 14:32:08.065322] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:16.651 BaseBdev1 00:07:16.651 14:32:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.651 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:07:16.651 14:32:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:16.651 14:32:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:16.651 14:32:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:16.651 14:32:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:16.651 14:32:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:16.651 14:32:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:16.651 14:32:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.651 14:32:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.651 14:32:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.651 14:32:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:16.651 14:32:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.651 14:32:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.651 [ 00:07:16.651 { 00:07:16.651 "name": "BaseBdev1", 00:07:16.651 "aliases": [ 00:07:16.651 "b448eaa6-c51f-4ccc-9a7c-41caa80c5610" 00:07:16.651 ], 00:07:16.651 "product_name": "Malloc disk", 00:07:16.651 "block_size": 512, 00:07:16.651 "num_blocks": 65536, 00:07:16.651 "uuid": "b448eaa6-c51f-4ccc-9a7c-41caa80c5610", 00:07:16.651 "assigned_rate_limits": { 00:07:16.651 "rw_ios_per_sec": 0, 00:07:16.651 "rw_mbytes_per_sec": 0, 00:07:16.651 "r_mbytes_per_sec": 0, 00:07:16.651 "w_mbytes_per_sec": 0 00:07:16.651 }, 00:07:16.651 "claimed": true, 00:07:16.651 "claim_type": "exclusive_write", 00:07:16.651 "zoned": false, 00:07:16.651 "supported_io_types": { 00:07:16.651 "read": true, 00:07:16.651 "write": true, 00:07:16.651 "unmap": true, 00:07:16.651 "flush": true, 00:07:16.651 "reset": true, 00:07:16.651 "nvme_admin": false, 00:07:16.651 "nvme_io": false, 00:07:16.651 "nvme_io_md": false, 00:07:16.651 "write_zeroes": true, 00:07:16.651 "zcopy": true, 00:07:16.651 "get_zone_info": false, 00:07:16.651 "zone_management": false, 00:07:16.651 "zone_append": false, 00:07:16.651 "compare": false, 00:07:16.651 "compare_and_write": false, 
00:07:16.651 "abort": true, 00:07:16.651 "seek_hole": false, 00:07:16.651 "seek_data": false, 00:07:16.651 "copy": true, 00:07:16.651 "nvme_iov_md": false 00:07:16.651 }, 00:07:16.651 "memory_domains": [ 00:07:16.651 { 00:07:16.651 "dma_device_id": "system", 00:07:16.651 "dma_device_type": 1 00:07:16.651 }, 00:07:16.651 { 00:07:16.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.651 "dma_device_type": 2 00:07:16.651 } 00:07:16.651 ], 00:07:16.651 "driver_specific": {} 00:07:16.651 } 00:07:16.651 ] 00:07:16.651 14:32:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.651 14:32:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:16.651 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:16.652 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:16.652 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:16.652 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:16.652 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:16.652 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:16.652 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:16.652 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:16.652 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:16.652 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:16.652 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:16.652 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.652 14:32:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.652 14:32:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.652 14:32:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.652 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:16.652 "name": "Existed_Raid", 00:07:16.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:16.652 "strip_size_kb": 64, 00:07:16.652 "state": "configuring", 00:07:16.652 "raid_level": "concat", 00:07:16.652 "superblock": false, 00:07:16.652 "num_base_bdevs": 3, 00:07:16.652 "num_base_bdevs_discovered": 2, 00:07:16.652 "num_base_bdevs_operational": 3, 00:07:16.652 "base_bdevs_list": [ 00:07:16.652 { 00:07:16.652 "name": "BaseBdev1", 00:07:16.652 "uuid": "b448eaa6-c51f-4ccc-9a7c-41caa80c5610", 00:07:16.652 "is_configured": true, 00:07:16.652 "data_offset": 0, 00:07:16.652 "data_size": 65536 00:07:16.652 }, 00:07:16.652 { 00:07:16.652 "name": null, 00:07:16.652 "uuid": "ba4d3f62-74aa-4b38-b473-01070791e04c", 00:07:16.652 "is_configured": false, 00:07:16.652 "data_offset": 0, 00:07:16.652 "data_size": 65536 00:07:16.652 }, 00:07:16.652 { 00:07:16.652 "name": "BaseBdev3", 00:07:16.652 "uuid": "a82c5c4f-629f-4079-b417-11c3ca9e4af4", 00:07:16.652 "is_configured": true, 00:07:16.652 "data_offset": 0, 00:07:16.652 "data_size": 65536 00:07:16.652 } 00:07:16.652 ] 00:07:16.652 }' 00:07:16.652 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:16.652 14:32:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.912 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.912 
14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:16.912 14:32:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.912 14:32:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.912 14:32:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.912 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:07:16.912 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:07:16.912 14:32:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.912 14:32:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.912 [2024-10-01 14:32:08.437487] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:16.912 14:32:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.912 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:16.912 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:16.912 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:16.912 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:16.912 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:16.912 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:16.912 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:16.912 14:32:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:16.912 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:16.912 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:16.912 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.912 14:32:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.912 14:32:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.912 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:16.912 14:32:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.912 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:16.912 "name": "Existed_Raid", 00:07:16.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:16.912 "strip_size_kb": 64, 00:07:16.912 "state": "configuring", 00:07:16.912 "raid_level": "concat", 00:07:16.912 "superblock": false, 00:07:16.912 "num_base_bdevs": 3, 00:07:16.912 "num_base_bdevs_discovered": 1, 00:07:16.912 "num_base_bdevs_operational": 3, 00:07:16.912 "base_bdevs_list": [ 00:07:16.912 { 00:07:16.912 "name": "BaseBdev1", 00:07:16.912 "uuid": "b448eaa6-c51f-4ccc-9a7c-41caa80c5610", 00:07:16.912 "is_configured": true, 00:07:16.912 "data_offset": 0, 00:07:16.912 "data_size": 65536 00:07:16.912 }, 00:07:16.912 { 00:07:16.912 "name": null, 00:07:16.912 "uuid": "ba4d3f62-74aa-4b38-b473-01070791e04c", 00:07:16.912 "is_configured": false, 00:07:16.912 "data_offset": 0, 00:07:16.912 "data_size": 65536 00:07:16.912 }, 00:07:16.912 { 00:07:16.912 "name": null, 00:07:16.912 "uuid": "a82c5c4f-629f-4079-b417-11c3ca9e4af4", 00:07:16.912 "is_configured": false, 00:07:16.912 "data_offset": 0, 00:07:16.912 "data_size": 65536 00:07:16.912 
} 00:07:16.912 ] 00:07:16.912 }' 00:07:16.912 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:16.912 14:32:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.173 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.173 14:32:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.173 14:32:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.173 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:17.173 14:32:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.173 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:07:17.173 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:07:17.173 14:32:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.173 14:32:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.173 [2024-10-01 14:32:08.793580] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:17.173 14:32:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.173 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:17.173 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:17.173 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:17.173 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:07:17.173 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:17.173 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:17.173 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:17.173 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:17.173 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:17.173 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:17.173 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.173 14:32:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.173 14:32:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.173 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:17.173 14:32:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.173 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:17.173 "name": "Existed_Raid", 00:07:17.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.173 "strip_size_kb": 64, 00:07:17.173 "state": "configuring", 00:07:17.173 "raid_level": "concat", 00:07:17.173 "superblock": false, 00:07:17.173 "num_base_bdevs": 3, 00:07:17.173 "num_base_bdevs_discovered": 2, 00:07:17.173 "num_base_bdevs_operational": 3, 00:07:17.173 "base_bdevs_list": [ 00:07:17.173 { 00:07:17.173 "name": "BaseBdev1", 00:07:17.173 "uuid": "b448eaa6-c51f-4ccc-9a7c-41caa80c5610", 00:07:17.173 "is_configured": true, 00:07:17.173 "data_offset": 0, 00:07:17.173 "data_size": 65536 00:07:17.173 }, 00:07:17.173 { 
00:07:17.173 "name": null, 00:07:17.173 "uuid": "ba4d3f62-74aa-4b38-b473-01070791e04c", 00:07:17.173 "is_configured": false, 00:07:17.173 "data_offset": 0, 00:07:17.173 "data_size": 65536 00:07:17.173 }, 00:07:17.173 { 00:07:17.173 "name": "BaseBdev3", 00:07:17.173 "uuid": "a82c5c4f-629f-4079-b417-11c3ca9e4af4", 00:07:17.173 "is_configured": true, 00:07:17.173 "data_offset": 0, 00:07:17.173 "data_size": 65536 00:07:17.173 } 00:07:17.173 ] 00:07:17.173 }' 00:07:17.173 14:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:17.173 14:32:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.432 14:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:17.691 14:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.691 14:32:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.691 14:32:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.691 14:32:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.691 14:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:07:17.691 14:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:17.691 14:32:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.691 14:32:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.691 [2024-10-01 14:32:09.161689] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:17.691 14:32:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.691 14:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state 
Existed_Raid configuring concat 64 3 00:07:17.691 14:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:17.691 14:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:17.691 14:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:17.691 14:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:17.691 14:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:17.691 14:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:17.691 14:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:17.691 14:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:17.691 14:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:17.691 14:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.691 14:32:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.691 14:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:17.692 14:32:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.692 14:32:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.692 14:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:17.692 "name": "Existed_Raid", 00:07:17.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.692 "strip_size_kb": 64, 00:07:17.692 "state": "configuring", 00:07:17.692 "raid_level": "concat", 00:07:17.692 "superblock": false, 00:07:17.692 "num_base_bdevs": 3, 
00:07:17.692 "num_base_bdevs_discovered": 1, 00:07:17.692 "num_base_bdevs_operational": 3, 00:07:17.692 "base_bdevs_list": [ 00:07:17.692 { 00:07:17.692 "name": null, 00:07:17.692 "uuid": "b448eaa6-c51f-4ccc-9a7c-41caa80c5610", 00:07:17.692 "is_configured": false, 00:07:17.692 "data_offset": 0, 00:07:17.692 "data_size": 65536 00:07:17.692 }, 00:07:17.692 { 00:07:17.692 "name": null, 00:07:17.692 "uuid": "ba4d3f62-74aa-4b38-b473-01070791e04c", 00:07:17.692 "is_configured": false, 00:07:17.692 "data_offset": 0, 00:07:17.692 "data_size": 65536 00:07:17.692 }, 00:07:17.692 { 00:07:17.692 "name": "BaseBdev3", 00:07:17.692 "uuid": "a82c5c4f-629f-4079-b417-11c3ca9e4af4", 00:07:17.692 "is_configured": true, 00:07:17.692 "data_offset": 0, 00:07:17.692 "data_size": 65536 00:07:17.692 } 00:07:17.692 ] 00:07:17.692 }' 00:07:17.692 14:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:17.692 14:32:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.951 14:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:17.951 14:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.951 14:32:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.951 14:32:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.951 14:32:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.951 14:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:07:17.951 14:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:07:17.951 14:32:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.951 14:32:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.951 [2024-10-01 14:32:09.589433] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:17.951 14:32:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.951 14:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:17.951 14:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:17.951 14:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:17.951 14:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:17.951 14:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:17.951 14:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:17.951 14:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:17.951 14:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:17.951 14:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:17.951 14:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:17.951 14:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:17.951 14:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.951 14:32:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.951 14:32:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.951 14:32:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.951 14:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:17.951 "name": "Existed_Raid", 00:07:17.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.951 "strip_size_kb": 64, 00:07:17.951 "state": "configuring", 00:07:17.951 "raid_level": "concat", 00:07:17.951 "superblock": false, 00:07:17.951 "num_base_bdevs": 3, 00:07:17.951 "num_base_bdevs_discovered": 2, 00:07:17.951 "num_base_bdevs_operational": 3, 00:07:17.951 "base_bdevs_list": [ 00:07:17.951 { 00:07:17.951 "name": null, 00:07:17.951 "uuid": "b448eaa6-c51f-4ccc-9a7c-41caa80c5610", 00:07:17.951 "is_configured": false, 00:07:17.951 "data_offset": 0, 00:07:17.951 "data_size": 65536 00:07:17.951 }, 00:07:17.951 { 00:07:17.951 "name": "BaseBdev2", 00:07:17.951 "uuid": "ba4d3f62-74aa-4b38-b473-01070791e04c", 00:07:17.951 "is_configured": true, 00:07:17.951 "data_offset": 0, 00:07:17.951 "data_size": 65536 00:07:17.951 }, 00:07:17.951 { 00:07:17.951 "name": "BaseBdev3", 00:07:17.951 "uuid": "a82c5c4f-629f-4079-b417-11c3ca9e4af4", 00:07:17.951 "is_configured": true, 00:07:17.951 "data_offset": 0, 00:07:17.951 "data_size": 65536 00:07:17.951 } 00:07:17.951 ] 00:07:17.951 }' 00:07:17.951 14:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:17.951 14:32:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.524 14:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.524 14:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:18.524 14:32:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.524 14:32:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.524 14:32:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.524 14:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:07:18.524 14:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.524 14:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:07:18.524 14:32:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.524 14:32:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.524 14:32:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.524 14:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b448eaa6-c51f-4ccc-9a7c-41caa80c5610 00:07:18.524 14:32:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.524 14:32:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.524 [2024-10-01 14:32:09.995921] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:07:18.524 [2024-10-01 14:32:09.995967] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:07:18.524 [2024-10-01 14:32:09.995976] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:18.524 [2024-10-01 14:32:09.996218] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:18.524 [2024-10-01 14:32:09.996345] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:07:18.524 [2024-10-01 14:32:09.996359] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:07:18.524 [2024-10-01 14:32:09.996574] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:07:18.524 NewBaseBdev 00:07:18.524 14:32:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.524 14:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:07:18.524 14:32:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:07:18.524 14:32:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:18.524 14:32:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:18.524 14:32:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:18.524 14:32:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:18.524 14:32:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:18.524 14:32:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.524 14:32:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.524 14:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.524 14:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:07:18.524 14:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.524 14:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.524 [ 00:07:18.524 { 00:07:18.524 "name": "NewBaseBdev", 00:07:18.524 "aliases": [ 00:07:18.524 "b448eaa6-c51f-4ccc-9a7c-41caa80c5610" 00:07:18.524 ], 00:07:18.524 "product_name": "Malloc disk", 00:07:18.524 "block_size": 512, 00:07:18.524 "num_blocks": 65536, 00:07:18.524 "uuid": "b448eaa6-c51f-4ccc-9a7c-41caa80c5610", 00:07:18.524 "assigned_rate_limits": { 
00:07:18.524 "rw_ios_per_sec": 0, 00:07:18.524 "rw_mbytes_per_sec": 0, 00:07:18.524 "r_mbytes_per_sec": 0, 00:07:18.524 "w_mbytes_per_sec": 0 00:07:18.524 }, 00:07:18.524 "claimed": true, 00:07:18.524 "claim_type": "exclusive_write", 00:07:18.524 "zoned": false, 00:07:18.524 "supported_io_types": { 00:07:18.524 "read": true, 00:07:18.524 "write": true, 00:07:18.524 "unmap": true, 00:07:18.524 "flush": true, 00:07:18.524 "reset": true, 00:07:18.524 "nvme_admin": false, 00:07:18.524 "nvme_io": false, 00:07:18.524 "nvme_io_md": false, 00:07:18.524 "write_zeroes": true, 00:07:18.524 "zcopy": true, 00:07:18.524 "get_zone_info": false, 00:07:18.524 "zone_management": false, 00:07:18.524 "zone_append": false, 00:07:18.524 "compare": false, 00:07:18.524 "compare_and_write": false, 00:07:18.524 "abort": true, 00:07:18.524 "seek_hole": false, 00:07:18.524 "seek_data": false, 00:07:18.524 "copy": true, 00:07:18.524 "nvme_iov_md": false 00:07:18.524 }, 00:07:18.524 "memory_domains": [ 00:07:18.524 { 00:07:18.524 "dma_device_id": "system", 00:07:18.524 "dma_device_type": 1 00:07:18.524 }, 00:07:18.524 { 00:07:18.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.524 "dma_device_type": 2 00:07:18.524 } 00:07:18.524 ], 00:07:18.524 "driver_specific": {} 00:07:18.524 } 00:07:18.524 ] 00:07:18.524 14:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.524 14:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:18.524 14:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:07:18.524 14:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:18.524 14:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:18.524 14:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 
00:07:18.524 14:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:18.524 14:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:18.524 14:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:18.524 14:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:18.524 14:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:18.524 14:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:18.524 14:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.524 14:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.524 14:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:18.524 14:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.524 14:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.524 14:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:18.524 "name": "Existed_Raid", 00:07:18.524 "uuid": "00b13e5d-1288-421b-8dfa-c538ba1e0084", 00:07:18.524 "strip_size_kb": 64, 00:07:18.524 "state": "online", 00:07:18.524 "raid_level": "concat", 00:07:18.524 "superblock": false, 00:07:18.524 "num_base_bdevs": 3, 00:07:18.524 "num_base_bdevs_discovered": 3, 00:07:18.524 "num_base_bdevs_operational": 3, 00:07:18.524 "base_bdevs_list": [ 00:07:18.524 { 00:07:18.524 "name": "NewBaseBdev", 00:07:18.524 "uuid": "b448eaa6-c51f-4ccc-9a7c-41caa80c5610", 00:07:18.524 "is_configured": true, 00:07:18.524 "data_offset": 0, 00:07:18.524 "data_size": 65536 00:07:18.524 }, 00:07:18.524 { 00:07:18.524 "name": 
"BaseBdev2", 00:07:18.524 "uuid": "ba4d3f62-74aa-4b38-b473-01070791e04c", 00:07:18.524 "is_configured": true, 00:07:18.524 "data_offset": 0, 00:07:18.524 "data_size": 65536 00:07:18.524 }, 00:07:18.524 { 00:07:18.524 "name": "BaseBdev3", 00:07:18.524 "uuid": "a82c5c4f-629f-4079-b417-11c3ca9e4af4", 00:07:18.524 "is_configured": true, 00:07:18.524 "data_offset": 0, 00:07:18.524 "data_size": 65536 00:07:18.524 } 00:07:18.524 ] 00:07:18.524 }' 00:07:18.524 14:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:18.524 14:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.786 14:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:07:18.786 14:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:18.786 14:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:18.786 14:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:18.786 14:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:18.786 14:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:18.786 14:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:18.786 14:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:18.786 14:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.787 14:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.787 [2024-10-01 14:32:10.328381] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:18.787 14:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:18.787 14:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:18.787 "name": "Existed_Raid", 00:07:18.787 "aliases": [ 00:07:18.787 "00b13e5d-1288-421b-8dfa-c538ba1e0084" 00:07:18.787 ], 00:07:18.787 "product_name": "Raid Volume", 00:07:18.787 "block_size": 512, 00:07:18.787 "num_blocks": 196608, 00:07:18.787 "uuid": "00b13e5d-1288-421b-8dfa-c538ba1e0084", 00:07:18.787 "assigned_rate_limits": { 00:07:18.787 "rw_ios_per_sec": 0, 00:07:18.787 "rw_mbytes_per_sec": 0, 00:07:18.787 "r_mbytes_per_sec": 0, 00:07:18.787 "w_mbytes_per_sec": 0 00:07:18.787 }, 00:07:18.787 "claimed": false, 00:07:18.787 "zoned": false, 00:07:18.787 "supported_io_types": { 00:07:18.787 "read": true, 00:07:18.787 "write": true, 00:07:18.787 "unmap": true, 00:07:18.787 "flush": true, 00:07:18.787 "reset": true, 00:07:18.787 "nvme_admin": false, 00:07:18.787 "nvme_io": false, 00:07:18.787 "nvme_io_md": false, 00:07:18.787 "write_zeroes": true, 00:07:18.787 "zcopy": false, 00:07:18.787 "get_zone_info": false, 00:07:18.787 "zone_management": false, 00:07:18.787 "zone_append": false, 00:07:18.787 "compare": false, 00:07:18.787 "compare_and_write": false, 00:07:18.787 "abort": false, 00:07:18.787 "seek_hole": false, 00:07:18.787 "seek_data": false, 00:07:18.787 "copy": false, 00:07:18.787 "nvme_iov_md": false 00:07:18.787 }, 00:07:18.787 "memory_domains": [ 00:07:18.787 { 00:07:18.787 "dma_device_id": "system", 00:07:18.787 "dma_device_type": 1 00:07:18.787 }, 00:07:18.787 { 00:07:18.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.787 "dma_device_type": 2 00:07:18.787 }, 00:07:18.787 { 00:07:18.787 "dma_device_id": "system", 00:07:18.787 "dma_device_type": 1 00:07:18.787 }, 00:07:18.787 { 00:07:18.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.787 "dma_device_type": 2 00:07:18.787 }, 00:07:18.787 { 00:07:18.787 "dma_device_id": "system", 00:07:18.787 "dma_device_type": 1 00:07:18.787 }, 00:07:18.787 { 00:07:18.787 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:07:18.787 "dma_device_type": 2 00:07:18.787 } 00:07:18.787 ], 00:07:18.787 "driver_specific": { 00:07:18.787 "raid": { 00:07:18.787 "uuid": "00b13e5d-1288-421b-8dfa-c538ba1e0084", 00:07:18.787 "strip_size_kb": 64, 00:07:18.787 "state": "online", 00:07:18.787 "raid_level": "concat", 00:07:18.787 "superblock": false, 00:07:18.787 "num_base_bdevs": 3, 00:07:18.787 "num_base_bdevs_discovered": 3, 00:07:18.787 "num_base_bdevs_operational": 3, 00:07:18.787 "base_bdevs_list": [ 00:07:18.787 { 00:07:18.787 "name": "NewBaseBdev", 00:07:18.787 "uuid": "b448eaa6-c51f-4ccc-9a7c-41caa80c5610", 00:07:18.787 "is_configured": true, 00:07:18.787 "data_offset": 0, 00:07:18.787 "data_size": 65536 00:07:18.787 }, 00:07:18.787 { 00:07:18.787 "name": "BaseBdev2", 00:07:18.787 "uuid": "ba4d3f62-74aa-4b38-b473-01070791e04c", 00:07:18.787 "is_configured": true, 00:07:18.787 "data_offset": 0, 00:07:18.787 "data_size": 65536 00:07:18.787 }, 00:07:18.787 { 00:07:18.787 "name": "BaseBdev3", 00:07:18.787 "uuid": "a82c5c4f-629f-4079-b417-11c3ca9e4af4", 00:07:18.787 "is_configured": true, 00:07:18.787 "data_offset": 0, 00:07:18.787 "data_size": 65536 00:07:18.787 } 00:07:18.787 ] 00:07:18.787 } 00:07:18.787 } 00:07:18.787 }' 00:07:18.787 14:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:18.787 14:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:07:18.787 BaseBdev2 00:07:18.787 BaseBdev3' 00:07:18.787 14:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:18.787 14:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:18.787 14:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:18.787 14:32:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:07:18.787 14:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.787 14:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.787 14:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:18.787 14:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.787 14:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:18.787 14:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:18.787 14:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:18.787 14:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:18.787 14:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.787 14:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.787 14:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:19.048 14:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.048 14:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:19.048 14:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:19.048 14:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:19.048 14:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:19.048 
14:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.048 14:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.048 14:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:19.048 14:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.048 14:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:19.048 14:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:19.048 14:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:19.048 14:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.048 14:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.048 [2024-10-01 14:32:10.544087] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:19.048 [2024-10-01 14:32:10.544115] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:19.048 [2024-10-01 14:32:10.544183] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:19.048 [2024-10-01 14:32:10.544236] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:19.048 [2024-10-01 14:32:10.544247] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:07:19.048 14:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.048 14:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 64270 00:07:19.048 14:32:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@950 -- # '[' -z 64270 ']' 00:07:19.048 14:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 64270 00:07:19.048 14:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:19.048 14:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:19.048 14:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64270 00:07:19.048 14:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:19.048 killing process with pid 64270 00:07:19.048 14:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:19.048 14:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64270' 00:07:19.048 14:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 64270 00:07:19.048 [2024-10-01 14:32:10.578179] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:19.048 14:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 64270 00:07:19.310 [2024-10-01 14:32:10.763233] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:20.253 14:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:20.253 00:07:20.253 real 0m8.059s 00:07:20.253 user 0m12.818s 00:07:20.253 sys 0m1.235s 00:07:20.253 14:32:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.253 ************************************ 00:07:20.253 END TEST raid_state_function_test 00:07:20.253 ************************************ 00:07:20.253 14:32:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.253 14:32:11 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test concat 3 true 00:07:20.253 14:32:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:20.254 14:32:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.254 14:32:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:20.254 ************************************ 00:07:20.254 START TEST raid_state_function_test_sb 00:07:20.254 ************************************ 00:07:20.254 14:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true 00:07:20.254 14:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:20.254 14:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:07:20.254 14:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:20.254 14:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:20.254 14:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:20.254 14:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:20.254 14:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:20.254 14:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:20.254 14:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:20.254 14:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:20.254 14:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:20.254 14:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:20.254 14:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 
00:07:20.254 14:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:20.254 14:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:20.254 14:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:20.254 14:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:20.254 14:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:20.254 14:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:20.254 14:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:20.254 14:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:20.254 14:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:20.254 14:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:20.254 14:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:20.254 14:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:20.254 14:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:20.254 Process raid pid: 64869 00:07:20.254 14:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64869 00:07:20.254 14:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64869' 00:07:20.254 14:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64869 00:07:20.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:20.254 14:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 64869 ']' 00:07:20.254 14:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.254 14:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:20.254 14:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:20.254 14:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.254 14:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:20.254 14:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.254 [2024-10-01 14:32:11.716828] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:07:20.254 [2024-10-01 14:32:11.716947] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:20.254 [2024-10-01 14:32:11.859639] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.515 [2024-10-01 14:32:12.054681] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.776 [2024-10-01 14:32:12.199867] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:20.776 [2024-10-01 14:32:12.199902] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:21.036 14:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:21.036 14:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:21.036 14:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:21.036 14:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.036 14:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.037 [2024-10-01 14:32:12.581018] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:21.037 [2024-10-01 14:32:12.581068] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:21.037 [2024-10-01 14:32:12.581080] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:21.037 [2024-10-01 14:32:12.581090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:21.037 [2024-10-01 14:32:12.581096] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:07:21.037 [2024-10-01 14:32:12.581106] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:21.037 14:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.037 14:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:21.037 14:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:21.037 14:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:21.037 14:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:21.037 14:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:21.037 14:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:21.037 14:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:21.037 14:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.037 14:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:21.037 14:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.037 14:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.037 14:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:21.037 14:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.037 14:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.037 14:32:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.037 14:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:21.037 "name": "Existed_Raid", 00:07:21.037 "uuid": "2a7f9b87-1e45-4108-b562-77c039219dd3", 00:07:21.037 "strip_size_kb": 64, 00:07:21.037 "state": "configuring", 00:07:21.037 "raid_level": "concat", 00:07:21.037 "superblock": true, 00:07:21.037 "num_base_bdevs": 3, 00:07:21.037 "num_base_bdevs_discovered": 0, 00:07:21.037 "num_base_bdevs_operational": 3, 00:07:21.037 "base_bdevs_list": [ 00:07:21.037 { 00:07:21.037 "name": "BaseBdev1", 00:07:21.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:21.037 "is_configured": false, 00:07:21.037 "data_offset": 0, 00:07:21.037 "data_size": 0 00:07:21.037 }, 00:07:21.037 { 00:07:21.037 "name": "BaseBdev2", 00:07:21.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:21.037 "is_configured": false, 00:07:21.037 "data_offset": 0, 00:07:21.037 "data_size": 0 00:07:21.037 }, 00:07:21.037 { 00:07:21.037 "name": "BaseBdev3", 00:07:21.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:21.037 "is_configured": false, 00:07:21.037 "data_offset": 0, 00:07:21.037 "data_size": 0 00:07:21.037 } 00:07:21.037 ] 00:07:21.037 }' 00:07:21.037 14:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:21.037 14:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.298 14:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:21.298 14:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.298 14:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.298 [2024-10-01 14:32:12.909010] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:21.298 [2024-10-01 14:32:12.909049] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:21.298 14:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.298 14:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:21.298 14:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.298 14:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.298 [2024-10-01 14:32:12.917034] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:21.298 [2024-10-01 14:32:12.917070] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:21.298 [2024-10-01 14:32:12.917079] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:21.298 [2024-10-01 14:32:12.917089] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:21.298 [2024-10-01 14:32:12.917095] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:21.298 [2024-10-01 14:32:12.917105] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:21.298 14:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.298 14:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:21.298 14:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.298 14:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.298 [2024-10-01 14:32:12.959351] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:21.298 BaseBdev1 
00:07:21.298 14:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.298 14:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:21.298 14:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:21.298 14:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:21.298 14:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:21.298 14:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:21.298 14:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:21.298 14:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:21.298 14:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.298 14:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.298 14:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.298 14:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:21.298 14:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.298 14:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.298 [ 00:07:21.298 { 00:07:21.298 "name": "BaseBdev1", 00:07:21.298 "aliases": [ 00:07:21.298 "e03a8b95-969b-4688-a1d0-77efaf64ba7a" 00:07:21.298 ], 00:07:21.298 "product_name": "Malloc disk", 00:07:21.298 "block_size": 512, 00:07:21.298 "num_blocks": 65536, 00:07:21.298 "uuid": "e03a8b95-969b-4688-a1d0-77efaf64ba7a", 00:07:21.298 "assigned_rate_limits": { 00:07:21.298 
"rw_ios_per_sec": 0, 00:07:21.298 "rw_mbytes_per_sec": 0, 00:07:21.298 "r_mbytes_per_sec": 0, 00:07:21.298 "w_mbytes_per_sec": 0 00:07:21.298 }, 00:07:21.298 "claimed": true, 00:07:21.298 "claim_type": "exclusive_write", 00:07:21.298 "zoned": false, 00:07:21.298 "supported_io_types": { 00:07:21.298 "read": true, 00:07:21.298 "write": true, 00:07:21.298 "unmap": true, 00:07:21.298 "flush": true, 00:07:21.298 "reset": true, 00:07:21.298 "nvme_admin": false, 00:07:21.298 "nvme_io": false, 00:07:21.298 "nvme_io_md": false, 00:07:21.298 "write_zeroes": true, 00:07:21.298 "zcopy": true, 00:07:21.298 "get_zone_info": false, 00:07:21.298 "zone_management": false, 00:07:21.298 "zone_append": false, 00:07:21.298 "compare": false, 00:07:21.298 "compare_and_write": false, 00:07:21.298 "abort": true, 00:07:21.298 "seek_hole": false, 00:07:21.298 "seek_data": false, 00:07:21.298 "copy": true, 00:07:21.298 "nvme_iov_md": false 00:07:21.298 }, 00:07:21.299 "memory_domains": [ 00:07:21.299 { 00:07:21.299 "dma_device_id": "system", 00:07:21.299 "dma_device_type": 1 00:07:21.299 }, 00:07:21.299 { 00:07:21.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.558 "dma_device_type": 2 00:07:21.558 } 00:07:21.558 ], 00:07:21.558 "driver_specific": {} 00:07:21.558 } 00:07:21.558 ] 00:07:21.558 14:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.558 14:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:21.558 14:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:21.558 14:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:21.558 14:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:21.558 14:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:07:21.558 14:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:21.558 14:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:21.558 14:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:21.558 14:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.558 14:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:21.558 14:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.558 14:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.558 14:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.558 14:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.558 14:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:21.558 14:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.558 14:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:21.558 "name": "Existed_Raid", 00:07:21.558 "uuid": "e0cb1e03-f512-4df3-bb31-006a36045110", 00:07:21.558 "strip_size_kb": 64, 00:07:21.558 "state": "configuring", 00:07:21.558 "raid_level": "concat", 00:07:21.558 "superblock": true, 00:07:21.558 "num_base_bdevs": 3, 00:07:21.558 "num_base_bdevs_discovered": 1, 00:07:21.558 "num_base_bdevs_operational": 3, 00:07:21.558 "base_bdevs_list": [ 00:07:21.558 { 00:07:21.558 "name": "BaseBdev1", 00:07:21.558 "uuid": "e03a8b95-969b-4688-a1d0-77efaf64ba7a", 00:07:21.558 "is_configured": true, 00:07:21.558 "data_offset": 2048, 00:07:21.558 "data_size": 
63488 00:07:21.558 }, 00:07:21.558 { 00:07:21.558 "name": "BaseBdev2", 00:07:21.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:21.558 "is_configured": false, 00:07:21.558 "data_offset": 0, 00:07:21.558 "data_size": 0 00:07:21.558 }, 00:07:21.558 { 00:07:21.558 "name": "BaseBdev3", 00:07:21.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:21.558 "is_configured": false, 00:07:21.558 "data_offset": 0, 00:07:21.558 "data_size": 0 00:07:21.558 } 00:07:21.558 ] 00:07:21.558 }' 00:07:21.558 14:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:21.558 14:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.817 14:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:21.817 14:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.817 14:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.817 [2024-10-01 14:32:13.307458] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:21.817 [2024-10-01 14:32:13.307509] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:21.817 14:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.817 14:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:21.817 14:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.817 14:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.817 [2024-10-01 14:32:13.315499] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:21.817 [2024-10-01 
14:32:13.317455] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:21.817 [2024-10-01 14:32:13.317573] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:21.817 [2024-10-01 14:32:13.317630] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:21.817 [2024-10-01 14:32:13.317660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:21.817 14:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.817 14:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:21.817 14:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:21.817 14:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:21.818 14:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:21.818 14:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:21.818 14:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:21.818 14:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:21.818 14:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:21.818 14:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:21.818 14:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.818 14:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:21.818 14:32:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.818 14:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.818 14:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.818 14:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:21.818 14:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.818 14:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.818 14:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:21.818 "name": "Existed_Raid", 00:07:21.818 "uuid": "21c8d13d-3d7f-45cc-9ef6-7a2bbc2a2d1e", 00:07:21.818 "strip_size_kb": 64, 00:07:21.818 "state": "configuring", 00:07:21.818 "raid_level": "concat", 00:07:21.818 "superblock": true, 00:07:21.818 "num_base_bdevs": 3, 00:07:21.818 "num_base_bdevs_discovered": 1, 00:07:21.818 "num_base_bdevs_operational": 3, 00:07:21.818 "base_bdevs_list": [ 00:07:21.818 { 00:07:21.818 "name": "BaseBdev1", 00:07:21.818 "uuid": "e03a8b95-969b-4688-a1d0-77efaf64ba7a", 00:07:21.818 "is_configured": true, 00:07:21.818 "data_offset": 2048, 00:07:21.818 "data_size": 63488 00:07:21.818 }, 00:07:21.818 { 00:07:21.818 "name": "BaseBdev2", 00:07:21.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:21.818 "is_configured": false, 00:07:21.818 "data_offset": 0, 00:07:21.818 "data_size": 0 00:07:21.818 }, 00:07:21.818 { 00:07:21.818 "name": "BaseBdev3", 00:07:21.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:21.818 "is_configured": false, 00:07:21.818 "data_offset": 0, 00:07:21.818 "data_size": 0 00:07:21.818 } 00:07:21.818 ] 00:07:21.818 }' 00:07:21.818 14:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:21.818 14:32:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:22.078 14:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:22.079 14:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.079 14:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.079 [2024-10-01 14:32:13.658771] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:22.079 BaseBdev2 00:07:22.079 14:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.079 14:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:22.079 14:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:22.079 14:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:22.079 14:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:22.079 14:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:22.079 14:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:22.079 14:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:22.079 14:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.079 14:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.079 14:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.079 14:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:22.079 14:32:13 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.079 14:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.079 [ 00:07:22.079 { 00:07:22.079 "name": "BaseBdev2", 00:07:22.079 "aliases": [ 00:07:22.079 "de9666ea-0269-4107-a837-214a442eb614" 00:07:22.079 ], 00:07:22.079 "product_name": "Malloc disk", 00:07:22.079 "block_size": 512, 00:07:22.079 "num_blocks": 65536, 00:07:22.079 "uuid": "de9666ea-0269-4107-a837-214a442eb614", 00:07:22.079 "assigned_rate_limits": { 00:07:22.079 "rw_ios_per_sec": 0, 00:07:22.079 "rw_mbytes_per_sec": 0, 00:07:22.079 "r_mbytes_per_sec": 0, 00:07:22.079 "w_mbytes_per_sec": 0 00:07:22.079 }, 00:07:22.079 "claimed": true, 00:07:22.079 "claim_type": "exclusive_write", 00:07:22.079 "zoned": false, 00:07:22.079 "supported_io_types": { 00:07:22.079 "read": true, 00:07:22.079 "write": true, 00:07:22.079 "unmap": true, 00:07:22.079 "flush": true, 00:07:22.079 "reset": true, 00:07:22.079 "nvme_admin": false, 00:07:22.079 "nvme_io": false, 00:07:22.079 "nvme_io_md": false, 00:07:22.079 "write_zeroes": true, 00:07:22.079 "zcopy": true, 00:07:22.079 "get_zone_info": false, 00:07:22.079 "zone_management": false, 00:07:22.079 "zone_append": false, 00:07:22.079 "compare": false, 00:07:22.079 "compare_and_write": false, 00:07:22.079 "abort": true, 00:07:22.079 "seek_hole": false, 00:07:22.079 "seek_data": false, 00:07:22.079 "copy": true, 00:07:22.079 "nvme_iov_md": false 00:07:22.079 }, 00:07:22.079 "memory_domains": [ 00:07:22.079 { 00:07:22.079 "dma_device_id": "system", 00:07:22.079 "dma_device_type": 1 00:07:22.079 }, 00:07:22.079 { 00:07:22.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.079 "dma_device_type": 2 00:07:22.079 } 00:07:22.079 ], 00:07:22.079 "driver_specific": {} 00:07:22.079 } 00:07:22.079 ] 00:07:22.079 14:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.079 14:32:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:07:22.079 14:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:22.079 14:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:22.079 14:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:22.079 14:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:22.079 14:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:22.079 14:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:22.079 14:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:22.079 14:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:22.079 14:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:22.079 14:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.079 14:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.079 14:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.079 14:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.079 14:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:22.079 14:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.079 14:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.079 14:32:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.079 14:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.079 "name": "Existed_Raid", 00:07:22.079 "uuid": "21c8d13d-3d7f-45cc-9ef6-7a2bbc2a2d1e", 00:07:22.079 "strip_size_kb": 64, 00:07:22.079 "state": "configuring", 00:07:22.079 "raid_level": "concat", 00:07:22.079 "superblock": true, 00:07:22.079 "num_base_bdevs": 3, 00:07:22.079 "num_base_bdevs_discovered": 2, 00:07:22.079 "num_base_bdevs_operational": 3, 00:07:22.079 "base_bdevs_list": [ 00:07:22.079 { 00:07:22.079 "name": "BaseBdev1", 00:07:22.079 "uuid": "e03a8b95-969b-4688-a1d0-77efaf64ba7a", 00:07:22.079 "is_configured": true, 00:07:22.079 "data_offset": 2048, 00:07:22.079 "data_size": 63488 00:07:22.079 }, 00:07:22.079 { 00:07:22.079 "name": "BaseBdev2", 00:07:22.079 "uuid": "de9666ea-0269-4107-a837-214a442eb614", 00:07:22.079 "is_configured": true, 00:07:22.079 "data_offset": 2048, 00:07:22.079 "data_size": 63488 00:07:22.079 }, 00:07:22.079 { 00:07:22.079 "name": "BaseBdev3", 00:07:22.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.079 "is_configured": false, 00:07:22.079 "data_offset": 0, 00:07:22.079 "data_size": 0 00:07:22.079 } 00:07:22.079 ] 00:07:22.079 }' 00:07:22.079 14:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.079 14:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.340 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:22.340 14:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.340 14:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.600 [2024-10-01 14:32:14.037473] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:22.600 [2024-10-01 14:32:14.037830] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:22.600 [2024-10-01 14:32:14.037877] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:22.600 BaseBdev3 00:07:22.600 [2024-10-01 14:32:14.038191] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:22.600 [2024-10-01 14:32:14.038330] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:22.600 [2024-10-01 14:32:14.038340] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:22.600 [2024-10-01 14:32:14.038467] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:22.600 14:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.600 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:07:22.600 14:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:07:22.600 14:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:22.600 14:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:22.600 14:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:22.600 14:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:22.600 14:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:22.600 14:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.600 14:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.601 14:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:07:22.601 14:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:22.601 14:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.601 14:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.601 [ 00:07:22.601 { 00:07:22.601 "name": "BaseBdev3", 00:07:22.601 "aliases": [ 00:07:22.601 "8fb0f9f1-ca8e-4ed7-9b51-f278e5d3e719" 00:07:22.601 ], 00:07:22.601 "product_name": "Malloc disk", 00:07:22.601 "block_size": 512, 00:07:22.601 "num_blocks": 65536, 00:07:22.601 "uuid": "8fb0f9f1-ca8e-4ed7-9b51-f278e5d3e719", 00:07:22.601 "assigned_rate_limits": { 00:07:22.601 "rw_ios_per_sec": 0, 00:07:22.601 "rw_mbytes_per_sec": 0, 00:07:22.601 "r_mbytes_per_sec": 0, 00:07:22.601 "w_mbytes_per_sec": 0 00:07:22.601 }, 00:07:22.601 "claimed": true, 00:07:22.601 "claim_type": "exclusive_write", 00:07:22.601 "zoned": false, 00:07:22.601 "supported_io_types": { 00:07:22.601 "read": true, 00:07:22.601 "write": true, 00:07:22.601 "unmap": true, 00:07:22.601 "flush": true, 00:07:22.601 "reset": true, 00:07:22.601 "nvme_admin": false, 00:07:22.601 "nvme_io": false, 00:07:22.601 "nvme_io_md": false, 00:07:22.601 "write_zeroes": true, 00:07:22.601 "zcopy": true, 00:07:22.601 "get_zone_info": false, 00:07:22.601 "zone_management": false, 00:07:22.601 "zone_append": false, 00:07:22.601 "compare": false, 00:07:22.601 "compare_and_write": false, 00:07:22.601 "abort": true, 00:07:22.601 "seek_hole": false, 00:07:22.601 "seek_data": false, 00:07:22.601 "copy": true, 00:07:22.601 "nvme_iov_md": false 00:07:22.601 }, 00:07:22.601 "memory_domains": [ 00:07:22.601 { 00:07:22.601 "dma_device_id": "system", 00:07:22.601 "dma_device_type": 1 00:07:22.601 }, 00:07:22.601 { 00:07:22.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.601 "dma_device_type": 2 00:07:22.601 } 00:07:22.601 ], 00:07:22.601 "driver_specific": 
{} 00:07:22.601 } 00:07:22.601 ] 00:07:22.601 14:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.601 14:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:22.601 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:22.601 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:22.601 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:07:22.601 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:22.601 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:22.601 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:22.601 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:22.601 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:22.601 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:22.601 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.601 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.601 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.601 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:22.601 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.601 14:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:07:22.601 14:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.601 14:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.601 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.601 "name": "Existed_Raid", 00:07:22.601 "uuid": "21c8d13d-3d7f-45cc-9ef6-7a2bbc2a2d1e", 00:07:22.601 "strip_size_kb": 64, 00:07:22.601 "state": "online", 00:07:22.601 "raid_level": "concat", 00:07:22.601 "superblock": true, 00:07:22.601 "num_base_bdevs": 3, 00:07:22.601 "num_base_bdevs_discovered": 3, 00:07:22.601 "num_base_bdevs_operational": 3, 00:07:22.601 "base_bdevs_list": [ 00:07:22.601 { 00:07:22.601 "name": "BaseBdev1", 00:07:22.601 "uuid": "e03a8b95-969b-4688-a1d0-77efaf64ba7a", 00:07:22.601 "is_configured": true, 00:07:22.602 "data_offset": 2048, 00:07:22.602 "data_size": 63488 00:07:22.602 }, 00:07:22.602 { 00:07:22.602 "name": "BaseBdev2", 00:07:22.602 "uuid": "de9666ea-0269-4107-a837-214a442eb614", 00:07:22.602 "is_configured": true, 00:07:22.602 "data_offset": 2048, 00:07:22.602 "data_size": 63488 00:07:22.602 }, 00:07:22.602 { 00:07:22.602 "name": "BaseBdev3", 00:07:22.602 "uuid": "8fb0f9f1-ca8e-4ed7-9b51-f278e5d3e719", 00:07:22.602 "is_configured": true, 00:07:22.602 "data_offset": 2048, 00:07:22.602 "data_size": 63488 00:07:22.602 } 00:07:22.602 ] 00:07:22.602 }' 00:07:22.602 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.602 14:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.861 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:22.861 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:22.861 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:07:22.861 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:22.861 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:22.861 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:22.861 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:22.861 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:22.861 14:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.861 14:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.861 [2024-10-01 14:32:14.389958] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:22.861 14:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.861 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:22.861 "name": "Existed_Raid", 00:07:22.861 "aliases": [ 00:07:22.861 "21c8d13d-3d7f-45cc-9ef6-7a2bbc2a2d1e" 00:07:22.861 ], 00:07:22.861 "product_name": "Raid Volume", 00:07:22.861 "block_size": 512, 00:07:22.861 "num_blocks": 190464, 00:07:22.861 "uuid": "21c8d13d-3d7f-45cc-9ef6-7a2bbc2a2d1e", 00:07:22.861 "assigned_rate_limits": { 00:07:22.861 "rw_ios_per_sec": 0, 00:07:22.861 "rw_mbytes_per_sec": 0, 00:07:22.861 "r_mbytes_per_sec": 0, 00:07:22.861 "w_mbytes_per_sec": 0 00:07:22.861 }, 00:07:22.861 "claimed": false, 00:07:22.861 "zoned": false, 00:07:22.861 "supported_io_types": { 00:07:22.861 "read": true, 00:07:22.861 "write": true, 00:07:22.861 "unmap": true, 00:07:22.861 "flush": true, 00:07:22.861 "reset": true, 00:07:22.861 "nvme_admin": false, 00:07:22.861 "nvme_io": false, 00:07:22.861 "nvme_io_md": false, 00:07:22.861 
"write_zeroes": true, 00:07:22.861 "zcopy": false, 00:07:22.861 "get_zone_info": false, 00:07:22.861 "zone_management": false, 00:07:22.861 "zone_append": false, 00:07:22.861 "compare": false, 00:07:22.861 "compare_and_write": false, 00:07:22.861 "abort": false, 00:07:22.861 "seek_hole": false, 00:07:22.861 "seek_data": false, 00:07:22.861 "copy": false, 00:07:22.861 "nvme_iov_md": false 00:07:22.861 }, 00:07:22.861 "memory_domains": [ 00:07:22.861 { 00:07:22.862 "dma_device_id": "system", 00:07:22.862 "dma_device_type": 1 00:07:22.862 }, 00:07:22.862 { 00:07:22.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.862 "dma_device_type": 2 00:07:22.862 }, 00:07:22.862 { 00:07:22.862 "dma_device_id": "system", 00:07:22.862 "dma_device_type": 1 00:07:22.862 }, 00:07:22.862 { 00:07:22.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.862 "dma_device_type": 2 00:07:22.862 }, 00:07:22.862 { 00:07:22.862 "dma_device_id": "system", 00:07:22.862 "dma_device_type": 1 00:07:22.862 }, 00:07:22.862 { 00:07:22.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.862 "dma_device_type": 2 00:07:22.862 } 00:07:22.862 ], 00:07:22.862 "driver_specific": { 00:07:22.862 "raid": { 00:07:22.862 "uuid": "21c8d13d-3d7f-45cc-9ef6-7a2bbc2a2d1e", 00:07:22.862 "strip_size_kb": 64, 00:07:22.862 "state": "online", 00:07:22.862 "raid_level": "concat", 00:07:22.862 "superblock": true, 00:07:22.862 "num_base_bdevs": 3, 00:07:22.862 "num_base_bdevs_discovered": 3, 00:07:22.862 "num_base_bdevs_operational": 3, 00:07:22.862 "base_bdevs_list": [ 00:07:22.862 { 00:07:22.862 "name": "BaseBdev1", 00:07:22.862 "uuid": "e03a8b95-969b-4688-a1d0-77efaf64ba7a", 00:07:22.862 "is_configured": true, 00:07:22.862 "data_offset": 2048, 00:07:22.862 "data_size": 63488 00:07:22.862 }, 00:07:22.862 { 00:07:22.862 "name": "BaseBdev2", 00:07:22.862 "uuid": "de9666ea-0269-4107-a837-214a442eb614", 00:07:22.862 "is_configured": true, 00:07:22.862 "data_offset": 2048, 00:07:22.862 "data_size": 63488 00:07:22.862 }, 
00:07:22.862 { 00:07:22.862 "name": "BaseBdev3", 00:07:22.862 "uuid": "8fb0f9f1-ca8e-4ed7-9b51-f278e5d3e719", 00:07:22.862 "is_configured": true, 00:07:22.862 "data_offset": 2048, 00:07:22.862 "data_size": 63488 00:07:22.862 } 00:07:22.862 ] 00:07:22.862 } 00:07:22.862 } 00:07:22.862 }' 00:07:22.862 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:22.862 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:22.862 BaseBdev2 00:07:22.862 BaseBdev3' 00:07:22.862 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:22.862 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:22.862 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:22.862 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:22.862 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:22.862 14:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.862 14:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.862 14:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.862 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:22.862 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:22.862 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:22.862 
14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:22.862 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:22.862 14:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.862 14:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.862 14:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.862 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:22.862 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:22.862 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:22.862 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:22.862 14:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.862 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:22.862 14:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.124 14:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.124 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:23.124 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:23.124 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:23.124 14:32:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.124 14:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.124 [2024-10-01 14:32:14.577678] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:23.124 [2024-10-01 14:32:14.577722] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:23.124 [2024-10-01 14:32:14.577771] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:23.124 14:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.124 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:23.124 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:23.124 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:23.124 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:23.124 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:23.124 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:07:23.124 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:23.124 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:23.124 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:23.124 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:23.124 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:23.124 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:23.124 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:23.124 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.124 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.124 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.124 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:23.124 14:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.124 14:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.124 14:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.124 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:23.124 "name": "Existed_Raid", 00:07:23.124 "uuid": "21c8d13d-3d7f-45cc-9ef6-7a2bbc2a2d1e", 00:07:23.124 "strip_size_kb": 64, 00:07:23.124 "state": "offline", 00:07:23.124 "raid_level": "concat", 00:07:23.124 "superblock": true, 00:07:23.124 "num_base_bdevs": 3, 00:07:23.124 "num_base_bdevs_discovered": 2, 00:07:23.124 "num_base_bdevs_operational": 2, 00:07:23.124 "base_bdevs_list": [ 00:07:23.124 { 00:07:23.124 "name": null, 00:07:23.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:23.124 "is_configured": false, 00:07:23.124 "data_offset": 0, 00:07:23.124 "data_size": 63488 00:07:23.124 }, 00:07:23.124 { 00:07:23.124 "name": "BaseBdev2", 00:07:23.124 "uuid": "de9666ea-0269-4107-a837-214a442eb614", 00:07:23.124 "is_configured": true, 00:07:23.124 "data_offset": 2048, 00:07:23.124 "data_size": 63488 00:07:23.124 }, 00:07:23.124 { 00:07:23.124 "name": "BaseBdev3", 00:07:23.124 "uuid": "8fb0f9f1-ca8e-4ed7-9b51-f278e5d3e719", 
00:07:23.124 "is_configured": true, 00:07:23.124 "data_offset": 2048, 00:07:23.124 "data_size": 63488 00:07:23.124 } 00:07:23.124 ] 00:07:23.124 }' 00:07:23.124 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:23.124 14:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.384 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:23.384 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:23.384 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.384 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:23.384 14:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.384 14:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.384 14:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.384 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:23.384 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:23.384 14:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:23.384 14:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.384 14:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.384 [2024-10-01 14:32:14.983568] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:23.384 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.384 14:32:15 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:23.384 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:23.384 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:23.384 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.384 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.384 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.384 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.645 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:23.645 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:23.645 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:07:23.645 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.645 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.645 [2024-10-01 14:32:15.081469] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:23.645 [2024-10-01 14:32:15.081514] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:23.645 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.645 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:23.645 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:23.645 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:23.645 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.645 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.645 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:23.645 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.645 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:23.645 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:23.645 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:07:23.645 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:07:23.645 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:23.645 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:23.645 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.645 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.645 BaseBdev2 00:07:23.645 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.645 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:07:23.645 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:23.645 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:23.645 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:23.645 14:32:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:23.645 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:23.645 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:23.645 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.645 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.645 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.645 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:23.645 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.645 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.645 [ 00:07:23.645 { 00:07:23.645 "name": "BaseBdev2", 00:07:23.645 "aliases": [ 00:07:23.645 "85f7e651-3033-4123-afbd-d6a2bea0a966" 00:07:23.645 ], 00:07:23.645 "product_name": "Malloc disk", 00:07:23.645 "block_size": 512, 00:07:23.645 "num_blocks": 65536, 00:07:23.645 "uuid": "85f7e651-3033-4123-afbd-d6a2bea0a966", 00:07:23.645 "assigned_rate_limits": { 00:07:23.645 "rw_ios_per_sec": 0, 00:07:23.645 "rw_mbytes_per_sec": 0, 00:07:23.645 "r_mbytes_per_sec": 0, 00:07:23.645 "w_mbytes_per_sec": 0 00:07:23.645 }, 00:07:23.645 "claimed": false, 00:07:23.645 "zoned": false, 00:07:23.645 "supported_io_types": { 00:07:23.645 "read": true, 00:07:23.645 "write": true, 00:07:23.645 "unmap": true, 00:07:23.645 "flush": true, 00:07:23.645 "reset": true, 00:07:23.645 "nvme_admin": false, 00:07:23.645 "nvme_io": false, 00:07:23.645 "nvme_io_md": false, 00:07:23.645 "write_zeroes": true, 00:07:23.645 "zcopy": true, 00:07:23.645 "get_zone_info": false, 00:07:23.645 
"zone_management": false, 00:07:23.645 "zone_append": false, 00:07:23.645 "compare": false, 00:07:23.645 "compare_and_write": false, 00:07:23.645 "abort": true, 00:07:23.645 "seek_hole": false, 00:07:23.645 "seek_data": false, 00:07:23.645 "copy": true, 00:07:23.645 "nvme_iov_md": false 00:07:23.645 }, 00:07:23.645 "memory_domains": [ 00:07:23.645 { 00:07:23.645 "dma_device_id": "system", 00:07:23.645 "dma_device_type": 1 00:07:23.645 }, 00:07:23.645 { 00:07:23.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.645 "dma_device_type": 2 00:07:23.645 } 00:07:23.645 ], 00:07:23.645 "driver_specific": {} 00:07:23.645 } 00:07:23.645 ] 00:07:23.645 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.645 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:23.645 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:23.645 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:23.646 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:23.646 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.646 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.646 BaseBdev3 00:07:23.646 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.646 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:07:23.646 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:07:23.646 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:23.646 14:32:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:07:23.646 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:23.646 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:23.646 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:23.646 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.646 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.646 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.646 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:23.646 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.646 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.646 [ 00:07:23.646 { 00:07:23.646 "name": "BaseBdev3", 00:07:23.646 "aliases": [ 00:07:23.646 "c5b07487-a8ed-4cdb-812b-61107f83d27e" 00:07:23.646 ], 00:07:23.646 "product_name": "Malloc disk", 00:07:23.646 "block_size": 512, 00:07:23.646 "num_blocks": 65536, 00:07:23.646 "uuid": "c5b07487-a8ed-4cdb-812b-61107f83d27e", 00:07:23.646 "assigned_rate_limits": { 00:07:23.646 "rw_ios_per_sec": 0, 00:07:23.646 "rw_mbytes_per_sec": 0, 00:07:23.646 "r_mbytes_per_sec": 0, 00:07:23.646 "w_mbytes_per_sec": 0 00:07:23.646 }, 00:07:23.646 "claimed": false, 00:07:23.646 "zoned": false, 00:07:23.646 "supported_io_types": { 00:07:23.646 "read": true, 00:07:23.646 "write": true, 00:07:23.646 "unmap": true, 00:07:23.646 "flush": true, 00:07:23.646 "reset": true, 00:07:23.646 "nvme_admin": false, 00:07:23.646 "nvme_io": false, 00:07:23.646 "nvme_io_md": false, 00:07:23.646 "write_zeroes": true, 00:07:23.646 
"zcopy": true, 00:07:23.646 "get_zone_info": false, 00:07:23.646 "zone_management": false, 00:07:23.646 "zone_append": false, 00:07:23.646 "compare": false, 00:07:23.646 "compare_and_write": false, 00:07:23.646 "abort": true, 00:07:23.646 "seek_hole": false, 00:07:23.646 "seek_data": false, 00:07:23.646 "copy": true, 00:07:23.646 "nvme_iov_md": false 00:07:23.646 }, 00:07:23.646 "memory_domains": [ 00:07:23.646 { 00:07:23.646 "dma_device_id": "system", 00:07:23.646 "dma_device_type": 1 00:07:23.646 }, 00:07:23.646 { 00:07:23.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.646 "dma_device_type": 2 00:07:23.646 } 00:07:23.646 ], 00:07:23.646 "driver_specific": {} 00:07:23.646 } 00:07:23.646 ] 00:07:23.646 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.646 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:23.646 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:23.646 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:23.646 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:23.646 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.646 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.646 [2024-10-01 14:32:15.296248] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:23.646 [2024-10-01 14:32:15.296392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:23.646 [2024-10-01 14:32:15.296459] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:23.646 [2024-10-01 14:32:15.298324] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:23.646 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.646 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:23.646 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:23.646 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:23.646 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:23.646 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:23.646 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:23.646 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:23.646 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:23.646 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.646 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.646 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.646 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:23.646 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.646 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.646 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.906 14:32:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:23.906 "name": "Existed_Raid", 00:07:23.906 "uuid": "0ad5c56c-d7ba-49ce-875c-8db324f08f6b", 00:07:23.906 "strip_size_kb": 64, 00:07:23.906 "state": "configuring", 00:07:23.906 "raid_level": "concat", 00:07:23.906 "superblock": true, 00:07:23.906 "num_base_bdevs": 3, 00:07:23.906 "num_base_bdevs_discovered": 2, 00:07:23.906 "num_base_bdevs_operational": 3, 00:07:23.906 "base_bdevs_list": [ 00:07:23.906 { 00:07:23.906 "name": "BaseBdev1", 00:07:23.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:23.906 "is_configured": false, 00:07:23.906 "data_offset": 0, 00:07:23.906 "data_size": 0 00:07:23.906 }, 00:07:23.906 { 00:07:23.906 "name": "BaseBdev2", 00:07:23.906 "uuid": "85f7e651-3033-4123-afbd-d6a2bea0a966", 00:07:23.906 "is_configured": true, 00:07:23.906 "data_offset": 2048, 00:07:23.906 "data_size": 63488 00:07:23.906 }, 00:07:23.906 { 00:07:23.906 "name": "BaseBdev3", 00:07:23.906 "uuid": "c5b07487-a8ed-4cdb-812b-61107f83d27e", 00:07:23.906 "is_configured": true, 00:07:23.906 "data_offset": 2048, 00:07:23.906 "data_size": 63488 00:07:23.906 } 00:07:23.906 ] 00:07:23.906 }' 00:07:23.906 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:23.906 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.165 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:07:24.165 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.165 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.165 [2024-10-01 14:32:15.616273] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:24.165 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.165 14:32:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:24.165 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:24.165 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:24.165 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:24.165 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:24.165 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:24.165 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:24.165 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:24.165 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:24.165 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:24.165 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.165 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:24.165 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.165 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.165 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.165 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:24.165 "name": "Existed_Raid", 00:07:24.165 "uuid": "0ad5c56c-d7ba-49ce-875c-8db324f08f6b", 00:07:24.165 "strip_size_kb": 64, 
00:07:24.165 "state": "configuring", 00:07:24.165 "raid_level": "concat", 00:07:24.165 "superblock": true, 00:07:24.165 "num_base_bdevs": 3, 00:07:24.165 "num_base_bdevs_discovered": 1, 00:07:24.165 "num_base_bdevs_operational": 3, 00:07:24.165 "base_bdevs_list": [ 00:07:24.165 { 00:07:24.165 "name": "BaseBdev1", 00:07:24.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:24.165 "is_configured": false, 00:07:24.165 "data_offset": 0, 00:07:24.165 "data_size": 0 00:07:24.165 }, 00:07:24.165 { 00:07:24.165 "name": null, 00:07:24.165 "uuid": "85f7e651-3033-4123-afbd-d6a2bea0a966", 00:07:24.165 "is_configured": false, 00:07:24.165 "data_offset": 0, 00:07:24.165 "data_size": 63488 00:07:24.165 }, 00:07:24.165 { 00:07:24.165 "name": "BaseBdev3", 00:07:24.165 "uuid": "c5b07487-a8ed-4cdb-812b-61107f83d27e", 00:07:24.165 "is_configured": true, 00:07:24.165 "data_offset": 2048, 00:07:24.165 "data_size": 63488 00:07:24.165 } 00:07:24.165 ] 00:07:24.165 }' 00:07:24.165 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:24.165 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.426 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.426 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:24.426 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.426 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.426 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.426 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:07:24.426 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:07:24.426 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.426 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.426 [2024-10-01 14:32:15.998627] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:24.426 BaseBdev1 00:07:24.426 14:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.426 14:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:07:24.426 14:32:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:24.426 14:32:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:24.426 14:32:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:24.426 14:32:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:24.426 14:32:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:24.426 14:32:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:24.426 14:32:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.426 14:32:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.426 14:32:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.426 14:32:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:24.426 14:32:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.426 14:32:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.426 
[ 00:07:24.426 { 00:07:24.426 "name": "BaseBdev1", 00:07:24.426 "aliases": [ 00:07:24.426 "d8b5d736-d81f-4836-b0f2-9323a4be5a08" 00:07:24.426 ], 00:07:24.426 "product_name": "Malloc disk", 00:07:24.426 "block_size": 512, 00:07:24.426 "num_blocks": 65536, 00:07:24.426 "uuid": "d8b5d736-d81f-4836-b0f2-9323a4be5a08", 00:07:24.426 "assigned_rate_limits": { 00:07:24.426 "rw_ios_per_sec": 0, 00:07:24.426 "rw_mbytes_per_sec": 0, 00:07:24.426 "r_mbytes_per_sec": 0, 00:07:24.426 "w_mbytes_per_sec": 0 00:07:24.426 }, 00:07:24.426 "claimed": true, 00:07:24.426 "claim_type": "exclusive_write", 00:07:24.426 "zoned": false, 00:07:24.426 "supported_io_types": { 00:07:24.426 "read": true, 00:07:24.426 "write": true, 00:07:24.426 "unmap": true, 00:07:24.426 "flush": true, 00:07:24.426 "reset": true, 00:07:24.426 "nvme_admin": false, 00:07:24.426 "nvme_io": false, 00:07:24.426 "nvme_io_md": false, 00:07:24.426 "write_zeroes": true, 00:07:24.427 "zcopy": true, 00:07:24.427 "get_zone_info": false, 00:07:24.427 "zone_management": false, 00:07:24.427 "zone_append": false, 00:07:24.427 "compare": false, 00:07:24.427 "compare_and_write": false, 00:07:24.427 "abort": true, 00:07:24.427 "seek_hole": false, 00:07:24.427 "seek_data": false, 00:07:24.427 "copy": true, 00:07:24.427 "nvme_iov_md": false 00:07:24.427 }, 00:07:24.427 "memory_domains": [ 00:07:24.427 { 00:07:24.427 "dma_device_id": "system", 00:07:24.427 "dma_device_type": 1 00:07:24.427 }, 00:07:24.427 { 00:07:24.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:24.427 "dma_device_type": 2 00:07:24.427 } 00:07:24.427 ], 00:07:24.427 "driver_specific": {} 00:07:24.427 } 00:07:24.427 ] 00:07:24.427 14:32:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.427 14:32:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:24.427 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 3 00:07:24.427 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:24.427 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:24.427 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:24.427 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:24.427 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:24.427 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:24.427 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:24.427 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:24.427 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:24.427 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.427 14:32:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.427 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:24.427 14:32:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.427 14:32:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.427 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:24.427 "name": "Existed_Raid", 00:07:24.427 "uuid": "0ad5c56c-d7ba-49ce-875c-8db324f08f6b", 00:07:24.427 "strip_size_kb": 64, 00:07:24.427 "state": "configuring", 00:07:24.427 "raid_level": "concat", 00:07:24.427 "superblock": true, 
00:07:24.427 "num_base_bdevs": 3, 00:07:24.427 "num_base_bdevs_discovered": 2, 00:07:24.427 "num_base_bdevs_operational": 3, 00:07:24.427 "base_bdevs_list": [ 00:07:24.427 { 00:07:24.427 "name": "BaseBdev1", 00:07:24.427 "uuid": "d8b5d736-d81f-4836-b0f2-9323a4be5a08", 00:07:24.427 "is_configured": true, 00:07:24.427 "data_offset": 2048, 00:07:24.427 "data_size": 63488 00:07:24.427 }, 00:07:24.427 { 00:07:24.427 "name": null, 00:07:24.427 "uuid": "85f7e651-3033-4123-afbd-d6a2bea0a966", 00:07:24.427 "is_configured": false, 00:07:24.427 "data_offset": 0, 00:07:24.427 "data_size": 63488 00:07:24.427 }, 00:07:24.427 { 00:07:24.427 "name": "BaseBdev3", 00:07:24.427 "uuid": "c5b07487-a8ed-4cdb-812b-61107f83d27e", 00:07:24.427 "is_configured": true, 00:07:24.427 "data_offset": 2048, 00:07:24.427 "data_size": 63488 00:07:24.427 } 00:07:24.427 ] 00:07:24.427 }' 00:07:24.427 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:24.427 14:32:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.687 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.687 14:32:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.687 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:24.687 14:32:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.687 14:32:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.948 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:07:24.948 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:07:24.948 14:32:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:07:24.948 14:32:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.949 [2024-10-01 14:32:16.382797] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:24.949 14:32:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.949 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:24.949 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:24.949 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:24.949 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:24.949 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:24.949 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:24.949 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:24.949 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:24.949 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:24.949 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:24.949 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.949 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:24.949 14:32:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.949 14:32:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:07:24.949 14:32:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.949 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:24.949 "name": "Existed_Raid", 00:07:24.949 "uuid": "0ad5c56c-d7ba-49ce-875c-8db324f08f6b", 00:07:24.949 "strip_size_kb": 64, 00:07:24.949 "state": "configuring", 00:07:24.949 "raid_level": "concat", 00:07:24.949 "superblock": true, 00:07:24.949 "num_base_bdevs": 3, 00:07:24.949 "num_base_bdevs_discovered": 1, 00:07:24.949 "num_base_bdevs_operational": 3, 00:07:24.949 "base_bdevs_list": [ 00:07:24.949 { 00:07:24.949 "name": "BaseBdev1", 00:07:24.949 "uuid": "d8b5d736-d81f-4836-b0f2-9323a4be5a08", 00:07:24.949 "is_configured": true, 00:07:24.949 "data_offset": 2048, 00:07:24.949 "data_size": 63488 00:07:24.949 }, 00:07:24.949 { 00:07:24.949 "name": null, 00:07:24.949 "uuid": "85f7e651-3033-4123-afbd-d6a2bea0a966", 00:07:24.949 "is_configured": false, 00:07:24.949 "data_offset": 0, 00:07:24.949 "data_size": 63488 00:07:24.949 }, 00:07:24.949 { 00:07:24.949 "name": null, 00:07:24.949 "uuid": "c5b07487-a8ed-4cdb-812b-61107f83d27e", 00:07:24.949 "is_configured": false, 00:07:24.949 "data_offset": 0, 00:07:24.949 "data_size": 63488 00:07:24.949 } 00:07:24.949 ] 00:07:24.949 }' 00:07:24.949 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:24.949 14:32:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.210 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:25.210 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.210 14:32:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.210 14:32:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:07:25.210 14:32:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.210 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:07:25.210 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:07:25.210 14:32:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.210 14:32:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.210 [2024-10-01 14:32:16.726883] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:25.210 14:32:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.210 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:25.210 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:25.210 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:25.210 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:25.210 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:25.210 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:25.210 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.210 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.210 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.210 14:32:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.210 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.210 14:32:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.210 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:25.210 14:32:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.210 14:32:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.210 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.210 "name": "Existed_Raid", 00:07:25.210 "uuid": "0ad5c56c-d7ba-49ce-875c-8db324f08f6b", 00:07:25.210 "strip_size_kb": 64, 00:07:25.210 "state": "configuring", 00:07:25.210 "raid_level": "concat", 00:07:25.210 "superblock": true, 00:07:25.210 "num_base_bdevs": 3, 00:07:25.210 "num_base_bdevs_discovered": 2, 00:07:25.210 "num_base_bdevs_operational": 3, 00:07:25.210 "base_bdevs_list": [ 00:07:25.210 { 00:07:25.210 "name": "BaseBdev1", 00:07:25.210 "uuid": "d8b5d736-d81f-4836-b0f2-9323a4be5a08", 00:07:25.210 "is_configured": true, 00:07:25.210 "data_offset": 2048, 00:07:25.210 "data_size": 63488 00:07:25.210 }, 00:07:25.210 { 00:07:25.210 "name": null, 00:07:25.210 "uuid": "85f7e651-3033-4123-afbd-d6a2bea0a966", 00:07:25.210 "is_configured": false, 00:07:25.210 "data_offset": 0, 00:07:25.210 "data_size": 63488 00:07:25.210 }, 00:07:25.210 { 00:07:25.210 "name": "BaseBdev3", 00:07:25.210 "uuid": "c5b07487-a8ed-4cdb-812b-61107f83d27e", 00:07:25.210 "is_configured": true, 00:07:25.210 "data_offset": 2048, 00:07:25.210 "data_size": 63488 00:07:25.210 } 00:07:25.210 ] 00:07:25.210 }' 00:07:25.210 14:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.210 14:32:16 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:07:25.470 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.470 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:25.470 14:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.470 14:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.470 14:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.470 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:07:25.471 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:25.471 14:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.471 14:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.471 [2024-10-01 14:32:17.079007] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:25.471 14:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.471 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:25.471 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:25.471 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:25.471 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:25.471 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:25.471 14:32:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:25.471 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.471 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.471 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.471 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.471 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.471 14:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.471 14:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.471 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:25.730 14:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.730 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.730 "name": "Existed_Raid", 00:07:25.730 "uuid": "0ad5c56c-d7ba-49ce-875c-8db324f08f6b", 00:07:25.730 "strip_size_kb": 64, 00:07:25.730 "state": "configuring", 00:07:25.730 "raid_level": "concat", 00:07:25.730 "superblock": true, 00:07:25.730 "num_base_bdevs": 3, 00:07:25.730 "num_base_bdevs_discovered": 1, 00:07:25.730 "num_base_bdevs_operational": 3, 00:07:25.730 "base_bdevs_list": [ 00:07:25.730 { 00:07:25.730 "name": null, 00:07:25.730 "uuid": "d8b5d736-d81f-4836-b0f2-9323a4be5a08", 00:07:25.730 "is_configured": false, 00:07:25.730 "data_offset": 0, 00:07:25.730 "data_size": 63488 00:07:25.730 }, 00:07:25.730 { 00:07:25.730 "name": null, 00:07:25.730 "uuid": "85f7e651-3033-4123-afbd-d6a2bea0a966", 00:07:25.730 "is_configured": false, 00:07:25.730 "data_offset": 0, 
00:07:25.730 "data_size": 63488 00:07:25.730 }, 00:07:25.730 { 00:07:25.730 "name": "BaseBdev3", 00:07:25.730 "uuid": "c5b07487-a8ed-4cdb-812b-61107f83d27e", 00:07:25.730 "is_configured": true, 00:07:25.730 "data_offset": 2048, 00:07:25.730 "data_size": 63488 00:07:25.730 } 00:07:25.730 ] 00:07:25.730 }' 00:07:25.730 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.730 14:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.990 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:25.990 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.990 14:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.990 14:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.990 14:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.990 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:07:25.990 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:07:25.990 14:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.990 14:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.990 [2024-10-01 14:32:17.498668] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:25.990 14:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.990 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:25.990 14:32:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:25.990 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:25.990 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:25.990 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:25.990 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:25.990 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.990 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.990 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.990 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.990 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.990 14:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.990 14:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.990 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:25.990 14:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.990 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.990 "name": "Existed_Raid", 00:07:25.990 "uuid": "0ad5c56c-d7ba-49ce-875c-8db324f08f6b", 00:07:25.990 "strip_size_kb": 64, 00:07:25.990 "state": "configuring", 00:07:25.990 "raid_level": "concat", 00:07:25.990 "superblock": true, 00:07:25.990 "num_base_bdevs": 3, 00:07:25.990 
"num_base_bdevs_discovered": 2, 00:07:25.990 "num_base_bdevs_operational": 3, 00:07:25.990 "base_bdevs_list": [ 00:07:25.990 { 00:07:25.990 "name": null, 00:07:25.990 "uuid": "d8b5d736-d81f-4836-b0f2-9323a4be5a08", 00:07:25.990 "is_configured": false, 00:07:25.990 "data_offset": 0, 00:07:25.990 "data_size": 63488 00:07:25.990 }, 00:07:25.990 { 00:07:25.990 "name": "BaseBdev2", 00:07:25.990 "uuid": "85f7e651-3033-4123-afbd-d6a2bea0a966", 00:07:25.990 "is_configured": true, 00:07:25.990 "data_offset": 2048, 00:07:25.990 "data_size": 63488 00:07:25.990 }, 00:07:25.990 { 00:07:25.990 "name": "BaseBdev3", 00:07:25.990 "uuid": "c5b07487-a8ed-4cdb-812b-61107f83d27e", 00:07:25.990 "is_configured": true, 00:07:25.990 "data_offset": 2048, 00:07:25.990 "data_size": 63488 00:07:25.990 } 00:07:25.990 ] 00:07:25.990 }' 00:07:25.990 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.990 14:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.252 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.252 14:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.252 14:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.252 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:26.252 14:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.252 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:07:26.252 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.252 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:07:26.252 14:32:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.252 14:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.252 14:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.252 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d8b5d736-d81f-4836-b0f2-9323a4be5a08 00:07:26.252 14:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.252 14:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.252 [2024-10-01 14:32:17.901143] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:07:26.252 [2024-10-01 14:32:17.901348] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:07:26.252 [2024-10-01 14:32:17.901365] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:26.252 [2024-10-01 14:32:17.901628] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:26.252 [2024-10-01 14:32:17.901765] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:07:26.252 NewBaseBdev 00:07:26.252 [2024-10-01 14:32:17.901813] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:07:26.252 [2024-10-01 14:32:17.901937] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:26.252 14:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.252 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:07:26.252 14:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:07:26.252 
14:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:26.252 14:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:26.252 14:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:26.252 14:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:26.252 14:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:26.252 14:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.252 14:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.252 14:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.252 14:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:07:26.252 14:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.252 14:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.252 [ 00:07:26.252 { 00:07:26.252 "name": "NewBaseBdev", 00:07:26.252 "aliases": [ 00:07:26.252 "d8b5d736-d81f-4836-b0f2-9323a4be5a08" 00:07:26.252 ], 00:07:26.252 "product_name": "Malloc disk", 00:07:26.252 "block_size": 512, 00:07:26.252 "num_blocks": 65536, 00:07:26.252 "uuid": "d8b5d736-d81f-4836-b0f2-9323a4be5a08", 00:07:26.252 "assigned_rate_limits": { 00:07:26.252 "rw_ios_per_sec": 0, 00:07:26.252 "rw_mbytes_per_sec": 0, 00:07:26.252 "r_mbytes_per_sec": 0, 00:07:26.252 "w_mbytes_per_sec": 0 00:07:26.252 }, 00:07:26.252 "claimed": true, 00:07:26.252 "claim_type": "exclusive_write", 00:07:26.252 "zoned": false, 00:07:26.252 "supported_io_types": { 00:07:26.252 "read": true, 00:07:26.252 "write": true, 00:07:26.252 
"unmap": true, 00:07:26.252 "flush": true, 00:07:26.252 "reset": true, 00:07:26.252 "nvme_admin": false, 00:07:26.253 "nvme_io": false, 00:07:26.253 "nvme_io_md": false, 00:07:26.253 "write_zeroes": true, 00:07:26.253 "zcopy": true, 00:07:26.253 "get_zone_info": false, 00:07:26.253 "zone_management": false, 00:07:26.253 "zone_append": false, 00:07:26.253 "compare": false, 00:07:26.253 "compare_and_write": false, 00:07:26.253 "abort": true, 00:07:26.253 "seek_hole": false, 00:07:26.253 "seek_data": false, 00:07:26.253 "copy": true, 00:07:26.253 "nvme_iov_md": false 00:07:26.253 }, 00:07:26.253 "memory_domains": [ 00:07:26.253 { 00:07:26.253 "dma_device_id": "system", 00:07:26.253 "dma_device_type": 1 00:07:26.253 }, 00:07:26.253 { 00:07:26.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.253 "dma_device_type": 2 00:07:26.253 } 00:07:26.253 ], 00:07:26.253 "driver_specific": {} 00:07:26.253 } 00:07:26.253 ] 00:07:26.253 14:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.253 14:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:26.253 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:07:26.253 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:26.253 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:26.253 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:26.253 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:26.253 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:26.253 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:07:26.253 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.253 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.253 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.253 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:26.253 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.253 14:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.253 14:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.513 14:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.513 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.513 "name": "Existed_Raid", 00:07:26.513 "uuid": "0ad5c56c-d7ba-49ce-875c-8db324f08f6b", 00:07:26.513 "strip_size_kb": 64, 00:07:26.513 "state": "online", 00:07:26.513 "raid_level": "concat", 00:07:26.513 "superblock": true, 00:07:26.513 "num_base_bdevs": 3, 00:07:26.513 "num_base_bdevs_discovered": 3, 00:07:26.513 "num_base_bdevs_operational": 3, 00:07:26.513 "base_bdevs_list": [ 00:07:26.513 { 00:07:26.513 "name": "NewBaseBdev", 00:07:26.513 "uuid": "d8b5d736-d81f-4836-b0f2-9323a4be5a08", 00:07:26.513 "is_configured": true, 00:07:26.513 "data_offset": 2048, 00:07:26.513 "data_size": 63488 00:07:26.513 }, 00:07:26.513 { 00:07:26.513 "name": "BaseBdev2", 00:07:26.513 "uuid": "85f7e651-3033-4123-afbd-d6a2bea0a966", 00:07:26.513 "is_configured": true, 00:07:26.513 "data_offset": 2048, 00:07:26.513 "data_size": 63488 00:07:26.513 }, 00:07:26.513 { 00:07:26.513 "name": "BaseBdev3", 00:07:26.513 "uuid": "c5b07487-a8ed-4cdb-812b-61107f83d27e", 
00:07:26.513 "is_configured": true, 00:07:26.513 "data_offset": 2048, 00:07:26.513 "data_size": 63488 00:07:26.513 } 00:07:26.513 ] 00:07:26.513 }' 00:07:26.513 14:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.513 14:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.773 14:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:07:26.773 14:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:26.773 14:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:26.773 14:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:26.773 14:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:26.773 14:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:26.773 14:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:26.773 14:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:26.773 14:32:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.773 14:32:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.773 [2024-10-01 14:32:18.237613] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:26.773 14:32:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.773 14:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:26.773 "name": "Existed_Raid", 00:07:26.773 "aliases": [ 00:07:26.773 "0ad5c56c-d7ba-49ce-875c-8db324f08f6b" 00:07:26.773 ], 00:07:26.773 
"product_name": "Raid Volume", 00:07:26.773 "block_size": 512, 00:07:26.773 "num_blocks": 190464, 00:07:26.773 "uuid": "0ad5c56c-d7ba-49ce-875c-8db324f08f6b", 00:07:26.773 "assigned_rate_limits": { 00:07:26.773 "rw_ios_per_sec": 0, 00:07:26.773 "rw_mbytes_per_sec": 0, 00:07:26.773 "r_mbytes_per_sec": 0, 00:07:26.773 "w_mbytes_per_sec": 0 00:07:26.773 }, 00:07:26.773 "claimed": false, 00:07:26.773 "zoned": false, 00:07:26.773 "supported_io_types": { 00:07:26.773 "read": true, 00:07:26.773 "write": true, 00:07:26.773 "unmap": true, 00:07:26.773 "flush": true, 00:07:26.773 "reset": true, 00:07:26.773 "nvme_admin": false, 00:07:26.773 "nvme_io": false, 00:07:26.773 "nvme_io_md": false, 00:07:26.773 "write_zeroes": true, 00:07:26.773 "zcopy": false, 00:07:26.773 "get_zone_info": false, 00:07:26.773 "zone_management": false, 00:07:26.773 "zone_append": false, 00:07:26.773 "compare": false, 00:07:26.773 "compare_and_write": false, 00:07:26.773 "abort": false, 00:07:26.773 "seek_hole": false, 00:07:26.773 "seek_data": false, 00:07:26.773 "copy": false, 00:07:26.773 "nvme_iov_md": false 00:07:26.773 }, 00:07:26.773 "memory_domains": [ 00:07:26.773 { 00:07:26.773 "dma_device_id": "system", 00:07:26.773 "dma_device_type": 1 00:07:26.773 }, 00:07:26.773 { 00:07:26.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.773 "dma_device_type": 2 00:07:26.773 }, 00:07:26.773 { 00:07:26.773 "dma_device_id": "system", 00:07:26.773 "dma_device_type": 1 00:07:26.773 }, 00:07:26.773 { 00:07:26.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.773 "dma_device_type": 2 00:07:26.773 }, 00:07:26.773 { 00:07:26.773 "dma_device_id": "system", 00:07:26.773 "dma_device_type": 1 00:07:26.773 }, 00:07:26.774 { 00:07:26.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.774 "dma_device_type": 2 00:07:26.774 } 00:07:26.774 ], 00:07:26.774 "driver_specific": { 00:07:26.774 "raid": { 00:07:26.774 "uuid": "0ad5c56c-d7ba-49ce-875c-8db324f08f6b", 00:07:26.774 "strip_size_kb": 64, 00:07:26.774 
"state": "online", 00:07:26.774 "raid_level": "concat", 00:07:26.774 "superblock": true, 00:07:26.774 "num_base_bdevs": 3, 00:07:26.774 "num_base_bdevs_discovered": 3, 00:07:26.774 "num_base_bdevs_operational": 3, 00:07:26.774 "base_bdevs_list": [ 00:07:26.774 { 00:07:26.774 "name": "NewBaseBdev", 00:07:26.774 "uuid": "d8b5d736-d81f-4836-b0f2-9323a4be5a08", 00:07:26.774 "is_configured": true, 00:07:26.774 "data_offset": 2048, 00:07:26.774 "data_size": 63488 00:07:26.774 }, 00:07:26.774 { 00:07:26.774 "name": "BaseBdev2", 00:07:26.774 "uuid": "85f7e651-3033-4123-afbd-d6a2bea0a966", 00:07:26.774 "is_configured": true, 00:07:26.774 "data_offset": 2048, 00:07:26.774 "data_size": 63488 00:07:26.774 }, 00:07:26.774 { 00:07:26.774 "name": "BaseBdev3", 00:07:26.774 "uuid": "c5b07487-a8ed-4cdb-812b-61107f83d27e", 00:07:26.774 "is_configured": true, 00:07:26.774 "data_offset": 2048, 00:07:26.774 "data_size": 63488 00:07:26.774 } 00:07:26.774 ] 00:07:26.774 } 00:07:26.774 } 00:07:26.774 }' 00:07:26.774 14:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:26.774 14:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:07:26.774 BaseBdev2 00:07:26.774 BaseBdev3' 00:07:26.774 14:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:26.774 14:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:26.774 14:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:26.774 14:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:07:26.774 14:32:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.774 14:32:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.774 14:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:26.774 14:32:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.774 14:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:26.774 14:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:26.774 14:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:26.774 14:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:26.774 14:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:26.774 14:32:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.774 14:32:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.774 14:32:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.774 14:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:26.774 14:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:26.774 14:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:26.774 14:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:26.774 14:32:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.774 14:32:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:26.774 14:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:26.774 14:32:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.774 14:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:26.774 14:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:26.774 14:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:26.774 14:32:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.774 14:32:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.774 [2024-10-01 14:32:18.421293] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:26.774 [2024-10-01 14:32:18.421408] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:26.774 [2024-10-01 14:32:18.421487] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:26.774 [2024-10-01 14:32:18.421542] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:26.774 [2024-10-01 14:32:18.421554] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:07:26.774 14:32:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.774 14:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64869 00:07:26.774 14:32:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 64869 ']' 00:07:26.774 14:32:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
kill -0 64869 00:07:26.774 14:32:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:26.774 14:32:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:26.774 14:32:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64869 00:07:26.774 killing process with pid 64869 00:07:26.774 14:32:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:26.774 14:32:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:26.774 14:32:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64869' 00:07:26.774 14:32:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 64869 00:07:26.774 [2024-10-01 14:32:18.450454] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:26.774 14:32:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 64869 00:07:27.036 [2024-10-01 14:32:18.640162] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:27.978 ************************************ 00:07:27.978 END TEST raid_state_function_test_sb 00:07:27.978 ************************************ 00:07:27.978 14:32:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:27.978 00:07:27.978 real 0m7.815s 00:07:27.978 user 0m12.369s 00:07:27.978 sys 0m1.210s 00:07:27.978 14:32:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:27.978 14:32:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.978 14:32:19 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:07:27.978 14:32:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 
00:07:27.978 14:32:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:27.978 14:32:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:27.978 ************************************ 00:07:27.978 START TEST raid_superblock_test 00:07:27.978 ************************************ 00:07:27.978 14:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3 00:07:27.978 14:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:27.978 14:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:07:27.978 14:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:27.978 14:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:27.978 14:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:27.978 14:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:27.978 14:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:27.978 14:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:27.978 14:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:27.978 14:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:27.978 14:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:27.978 14:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:27.978 14:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:27.978 14:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:27.978 14:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # 
strip_size=64 00:07:27.978 14:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:27.978 14:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65466 00:07:27.978 14:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:27.978 14:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65466 00:07:27.978 14:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 65466 ']' 00:07:27.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.978 14:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.978 14:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:27.978 14:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.978 14:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:27.978 14:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.978 [2024-10-01 14:32:19.603276] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:07:27.978 [2024-10-01 14:32:19.603388] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65466 ] 00:07:28.237 [2024-10-01 14:32:19.753966] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.586 [2024-10-01 14:32:19.937492] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.586 [2024-10-01 14:32:20.073023] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:28.586 [2024-10-01 14:32:20.073049] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:28.847 14:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:28.847 14:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:28.847 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:28.847 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:28.847 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:28.847 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:28.847 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:28.847 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:28.847 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:28.847 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:28.847 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:28.847 
14:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.847 14:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.109 malloc1 00:07:29.109 14:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.109 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:29.109 14:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.109 14:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.109 [2024-10-01 14:32:20.545728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:29.109 [2024-10-01 14:32:20.545785] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:29.109 [2024-10-01 14:32:20.545807] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:29.109 [2024-10-01 14:32:20.545819] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:29.109 [2024-10-01 14:32:20.548116] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:29.109 [2024-10-01 14:32:20.548257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:29.109 pt1 00:07:29.109 14:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.109 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:29.109 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:29.109 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:29.109 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:29.109 14:32:20 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:29.109 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:29.109 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:29.109 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:29.109 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:29.109 14:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.109 14:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.109 malloc2 00:07:29.109 14:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.110 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:29.110 14:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.110 14:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.110 [2024-10-01 14:32:20.595162] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:29.110 [2024-10-01 14:32:20.595214] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:29.110 [2024-10-01 14:32:20.595236] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:29.110 [2024-10-01 14:32:20.595245] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:29.110 [2024-10-01 14:32:20.597340] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:29.110 [2024-10-01 14:32:20.597374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:29.110 
pt2 00:07:29.110 14:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.110 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:29.110 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:29.110 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:07:29.110 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:07:29.110 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:07:29.110 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:29.110 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:29.110 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:29.110 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:07:29.110 14:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.110 14:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.110 malloc3 00:07:29.110 14:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.110 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:07:29.110 14:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.110 14:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.110 [2024-10-01 14:32:20.634952] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:07:29.110 [2024-10-01 14:32:20.634994] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:29.110 [2024-10-01 14:32:20.635014] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:29.110 [2024-10-01 14:32:20.635023] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:29.110 [2024-10-01 14:32:20.637086] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:29.110 [2024-10-01 14:32:20.637219] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:07:29.110 pt3 00:07:29.110 14:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.110 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:29.110 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:29.110 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:07:29.110 14:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.110 14:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.110 [2024-10-01 14:32:20.647017] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:29.110 [2024-10-01 14:32:20.648872] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:29.110 [2024-10-01 14:32:20.648934] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:07:29.110 [2024-10-01 14:32:20.649085] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:29.110 [2024-10-01 14:32:20.649098] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:29.110 [2024-10-01 14:32:20.649341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:07:29.110 [2024-10-01 14:32:20.649491] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:29.110 [2024-10-01 14:32:20.649501] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:29.110 [2024-10-01 14:32:20.649640] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:29.110 14:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.110 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:07:29.110 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:29.110 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:29.110 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:29.110 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.110 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:29.110 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.110 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.110 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.110 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.110 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.110 14:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.110 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:29.110 14:32:20 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.110 14:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.110 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.110 "name": "raid_bdev1", 00:07:29.110 "uuid": "b1496f2d-71da-495a-9a87-7f431f9ac0a8", 00:07:29.110 "strip_size_kb": 64, 00:07:29.110 "state": "online", 00:07:29.110 "raid_level": "concat", 00:07:29.110 "superblock": true, 00:07:29.110 "num_base_bdevs": 3, 00:07:29.110 "num_base_bdevs_discovered": 3, 00:07:29.110 "num_base_bdevs_operational": 3, 00:07:29.110 "base_bdevs_list": [ 00:07:29.110 { 00:07:29.110 "name": "pt1", 00:07:29.110 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:29.110 "is_configured": true, 00:07:29.110 "data_offset": 2048, 00:07:29.110 "data_size": 63488 00:07:29.110 }, 00:07:29.110 { 00:07:29.110 "name": "pt2", 00:07:29.110 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:29.110 "is_configured": true, 00:07:29.110 "data_offset": 2048, 00:07:29.110 "data_size": 63488 00:07:29.110 }, 00:07:29.110 { 00:07:29.110 "name": "pt3", 00:07:29.110 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:29.110 "is_configured": true, 00:07:29.110 "data_offset": 2048, 00:07:29.110 "data_size": 63488 00:07:29.110 } 00:07:29.110 ] 00:07:29.110 }' 00:07:29.110 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.110 14:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.371 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:29.371 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:29.371 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:29.371 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:07:29.371 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:29.371 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:29.371 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:29.371 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:29.371 14:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.371 14:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.371 [2024-10-01 14:32:20.963387] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:29.371 14:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.371 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:29.371 "name": "raid_bdev1", 00:07:29.371 "aliases": [ 00:07:29.371 "b1496f2d-71da-495a-9a87-7f431f9ac0a8" 00:07:29.371 ], 00:07:29.371 "product_name": "Raid Volume", 00:07:29.371 "block_size": 512, 00:07:29.371 "num_blocks": 190464, 00:07:29.371 "uuid": "b1496f2d-71da-495a-9a87-7f431f9ac0a8", 00:07:29.371 "assigned_rate_limits": { 00:07:29.371 "rw_ios_per_sec": 0, 00:07:29.371 "rw_mbytes_per_sec": 0, 00:07:29.371 "r_mbytes_per_sec": 0, 00:07:29.371 "w_mbytes_per_sec": 0 00:07:29.371 }, 00:07:29.371 "claimed": false, 00:07:29.371 "zoned": false, 00:07:29.371 "supported_io_types": { 00:07:29.371 "read": true, 00:07:29.371 "write": true, 00:07:29.371 "unmap": true, 00:07:29.371 "flush": true, 00:07:29.371 "reset": true, 00:07:29.371 "nvme_admin": false, 00:07:29.371 "nvme_io": false, 00:07:29.371 "nvme_io_md": false, 00:07:29.371 "write_zeroes": true, 00:07:29.371 "zcopy": false, 00:07:29.371 "get_zone_info": false, 00:07:29.371 "zone_management": false, 00:07:29.371 "zone_append": false, 00:07:29.371 "compare": 
false, 00:07:29.371 "compare_and_write": false, 00:07:29.371 "abort": false, 00:07:29.371 "seek_hole": false, 00:07:29.371 "seek_data": false, 00:07:29.371 "copy": false, 00:07:29.371 "nvme_iov_md": false 00:07:29.371 }, 00:07:29.371 "memory_domains": [ 00:07:29.371 { 00:07:29.371 "dma_device_id": "system", 00:07:29.371 "dma_device_type": 1 00:07:29.371 }, 00:07:29.371 { 00:07:29.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.371 "dma_device_type": 2 00:07:29.371 }, 00:07:29.371 { 00:07:29.371 "dma_device_id": "system", 00:07:29.371 "dma_device_type": 1 00:07:29.371 }, 00:07:29.371 { 00:07:29.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.371 "dma_device_type": 2 00:07:29.371 }, 00:07:29.371 { 00:07:29.371 "dma_device_id": "system", 00:07:29.371 "dma_device_type": 1 00:07:29.371 }, 00:07:29.371 { 00:07:29.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.371 "dma_device_type": 2 00:07:29.371 } 00:07:29.371 ], 00:07:29.371 "driver_specific": { 00:07:29.371 "raid": { 00:07:29.371 "uuid": "b1496f2d-71da-495a-9a87-7f431f9ac0a8", 00:07:29.371 "strip_size_kb": 64, 00:07:29.371 "state": "online", 00:07:29.371 "raid_level": "concat", 00:07:29.371 "superblock": true, 00:07:29.371 "num_base_bdevs": 3, 00:07:29.371 "num_base_bdevs_discovered": 3, 00:07:29.371 "num_base_bdevs_operational": 3, 00:07:29.371 "base_bdevs_list": [ 00:07:29.371 { 00:07:29.371 "name": "pt1", 00:07:29.371 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:29.371 "is_configured": true, 00:07:29.371 "data_offset": 2048, 00:07:29.371 "data_size": 63488 00:07:29.371 }, 00:07:29.371 { 00:07:29.371 "name": "pt2", 00:07:29.371 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:29.371 "is_configured": true, 00:07:29.371 "data_offset": 2048, 00:07:29.371 "data_size": 63488 00:07:29.371 }, 00:07:29.371 { 00:07:29.371 "name": "pt3", 00:07:29.371 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:29.371 "is_configured": true, 00:07:29.371 "data_offset": 2048, 00:07:29.371 
"data_size": 63488 00:07:29.371 } 00:07:29.371 ] 00:07:29.371 } 00:07:29.371 } 00:07:29.371 }' 00:07:29.371 14:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:29.371 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:29.371 pt2 00:07:29.371 pt3' 00:07:29.371 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:29.371 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:29.371 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:29.372 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:29.372 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.372 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.372 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:29.631 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.631 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:29.631 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:29.631 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.632 [2024-10-01 14:32:21.159365] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b1496f2d-71da-495a-9a87-7f431f9ac0a8 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b1496f2d-71da-495a-9a87-7f431f9ac0a8 ']' 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.632 [2024-10-01 14:32:21.191072] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:29.632 [2024-10-01 14:32:21.191098] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:29.632 [2024-10-01 14:32:21.191159] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:29.632 [2024-10-01 14:32:21.191220] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:29.632 [2024-10-01 14:32:21.191232] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.632 14:32:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.632 [2024-10-01 14:32:21.303134] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:29.632 [2024-10-01 14:32:21.305040] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 
00:07:29.632 [2024-10-01 14:32:21.305085] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:07:29.632 [2024-10-01 14:32:21.305131] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:29.632 [2024-10-01 14:32:21.305177] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:29.632 [2024-10-01 14:32:21.305198] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:07:29.632 [2024-10-01 14:32:21.305214] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:29.632 [2024-10-01 14:32:21.305224] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:29.632 request: 00:07:29.632 { 00:07:29.632 "name": "raid_bdev1", 00:07:29.632 "raid_level": "concat", 00:07:29.632 "base_bdevs": [ 00:07:29.632 "malloc1", 00:07:29.632 "malloc2", 00:07:29.632 "malloc3" 00:07:29.632 ], 00:07:29.632 "strip_size_kb": 64, 00:07:29.632 "superblock": false, 00:07:29.632 "method": "bdev_raid_create", 00:07:29.632 "req_id": 1 00:07:29.632 } 00:07:29.632 Got JSON-RPC error response 00:07:29.632 response: 00:07:29.632 { 00:07:29.632 "code": -17, 00:07:29.632 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:29.632 } 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.632 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.893 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.893 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:29.893 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:29.893 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:29.893 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.893 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.893 [2024-10-01 14:32:21.347116] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:29.893 [2024-10-01 14:32:21.347157] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:29.893 [2024-10-01 14:32:21.347173] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:07:29.893 [2024-10-01 14:32:21.347181] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:29.893 [2024-10-01 14:32:21.349311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:29.893 [2024-10-01 14:32:21.349442] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:29.893 [2024-10-01 14:32:21.349524] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:29.893 [2024-10-01 14:32:21.349571] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:29.893 pt1 00:07:29.893 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.893 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:07:29.893 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:29.893 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:29.893 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:29.893 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.893 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:29.893 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.893 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.893 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.893 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.893 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.893 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.893 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.893 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:29.893 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.893 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.893 "name": "raid_bdev1", 
00:07:29.893 "uuid": "b1496f2d-71da-495a-9a87-7f431f9ac0a8", 00:07:29.893 "strip_size_kb": 64, 00:07:29.893 "state": "configuring", 00:07:29.893 "raid_level": "concat", 00:07:29.893 "superblock": true, 00:07:29.893 "num_base_bdevs": 3, 00:07:29.893 "num_base_bdevs_discovered": 1, 00:07:29.893 "num_base_bdevs_operational": 3, 00:07:29.893 "base_bdevs_list": [ 00:07:29.893 { 00:07:29.893 "name": "pt1", 00:07:29.893 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:29.893 "is_configured": true, 00:07:29.893 "data_offset": 2048, 00:07:29.893 "data_size": 63488 00:07:29.893 }, 00:07:29.893 { 00:07:29.893 "name": null, 00:07:29.893 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:29.893 "is_configured": false, 00:07:29.893 "data_offset": 2048, 00:07:29.893 "data_size": 63488 00:07:29.893 }, 00:07:29.893 { 00:07:29.893 "name": null, 00:07:29.893 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:29.893 "is_configured": false, 00:07:29.893 "data_offset": 2048, 00:07:29.893 "data_size": 63488 00:07:29.893 } 00:07:29.893 ] 00:07:29.893 }' 00:07:29.893 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.893 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.155 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:07:30.155 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:30.155 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.155 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.155 [2024-10-01 14:32:21.671221] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:30.155 [2024-10-01 14:32:21.671283] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:30.155 [2024-10-01 14:32:21.671306] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:07:30.155 [2024-10-01 14:32:21.671316] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:30.155 [2024-10-01 14:32:21.671742] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:30.155 [2024-10-01 14:32:21.671757] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:30.155 [2024-10-01 14:32:21.671832] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:30.155 [2024-10-01 14:32:21.671853] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:30.155 pt2 00:07:30.155 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.155 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:07:30.155 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.155 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.155 [2024-10-01 14:32:21.679230] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:07:30.155 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.155 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:07:30.155 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:30.155 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:30.155 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:30.155 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:30.155 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:07:30.155 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:30.155 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:30.155 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:30.155 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:30.155 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:30.155 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.155 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.155 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.155 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.155 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:30.155 "name": "raid_bdev1", 00:07:30.155 "uuid": "b1496f2d-71da-495a-9a87-7f431f9ac0a8", 00:07:30.155 "strip_size_kb": 64, 00:07:30.155 "state": "configuring", 00:07:30.155 "raid_level": "concat", 00:07:30.155 "superblock": true, 00:07:30.155 "num_base_bdevs": 3, 00:07:30.155 "num_base_bdevs_discovered": 1, 00:07:30.155 "num_base_bdevs_operational": 3, 00:07:30.155 "base_bdevs_list": [ 00:07:30.155 { 00:07:30.155 "name": "pt1", 00:07:30.155 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:30.155 "is_configured": true, 00:07:30.155 "data_offset": 2048, 00:07:30.155 "data_size": 63488 00:07:30.155 }, 00:07:30.155 { 00:07:30.155 "name": null, 00:07:30.155 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:30.155 "is_configured": false, 00:07:30.155 "data_offset": 0, 00:07:30.155 "data_size": 63488 00:07:30.155 }, 00:07:30.155 { 00:07:30.155 "name": null, 00:07:30.155 
"uuid": "00000000-0000-0000-0000-000000000003", 00:07:30.155 "is_configured": false, 00:07:30.155 "data_offset": 2048, 00:07:30.155 "data_size": 63488 00:07:30.155 } 00:07:30.155 ] 00:07:30.155 }' 00:07:30.155 14:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:30.155 14:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.417 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:30.417 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:30.417 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:30.417 14:32:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.417 14:32:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.417 [2024-10-01 14:32:22.007289] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:30.417 [2024-10-01 14:32:22.007352] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:30.417 [2024-10-01 14:32:22.007371] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:07:30.417 [2024-10-01 14:32:22.007383] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:30.417 [2024-10-01 14:32:22.007812] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:30.417 [2024-10-01 14:32:22.007831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:30.417 [2024-10-01 14:32:22.007906] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:30.417 [2024-10-01 14:32:22.007935] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:30.417 pt2 00:07:30.417 14:32:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.417 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:30.417 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:30.417 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:07:30.417 14:32:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.417 14:32:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.417 [2024-10-01 14:32:22.015293] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:07:30.417 [2024-10-01 14:32:22.015336] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:30.417 [2024-10-01 14:32:22.015350] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:07:30.417 [2024-10-01 14:32:22.015360] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:30.417 [2024-10-01 14:32:22.015718] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:30.417 [2024-10-01 14:32:22.015737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:07:30.417 [2024-10-01 14:32:22.015793] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:07:30.417 [2024-10-01 14:32:22.015812] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:07:30.417 [2024-10-01 14:32:22.015923] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:30.417 [2024-10-01 14:32:22.015935] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:30.417 [2024-10-01 14:32:22.016164] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:07:30.417 [2024-10-01 14:32:22.016291] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:30.417 [2024-10-01 14:32:22.016299] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:30.417 [2024-10-01 14:32:22.016418] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:30.417 pt3 00:07:30.417 14:32:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.417 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:30.417 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:30.417 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:07:30.417 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:30.417 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:30.417 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:30.417 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:30.417 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:30.417 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:30.417 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:30.417 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:30.417 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:30.417 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.417 14:32:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:30.417 14:32:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.417 14:32:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.417 14:32:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.417 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:30.417 "name": "raid_bdev1", 00:07:30.417 "uuid": "b1496f2d-71da-495a-9a87-7f431f9ac0a8", 00:07:30.417 "strip_size_kb": 64, 00:07:30.417 "state": "online", 00:07:30.417 "raid_level": "concat", 00:07:30.417 "superblock": true, 00:07:30.417 "num_base_bdevs": 3, 00:07:30.417 "num_base_bdevs_discovered": 3, 00:07:30.417 "num_base_bdevs_operational": 3, 00:07:30.417 "base_bdevs_list": [ 00:07:30.417 { 00:07:30.417 "name": "pt1", 00:07:30.418 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:30.418 "is_configured": true, 00:07:30.418 "data_offset": 2048, 00:07:30.418 "data_size": 63488 00:07:30.418 }, 00:07:30.418 { 00:07:30.418 "name": "pt2", 00:07:30.418 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:30.418 "is_configured": true, 00:07:30.418 "data_offset": 2048, 00:07:30.418 "data_size": 63488 00:07:30.418 }, 00:07:30.418 { 00:07:30.418 "name": "pt3", 00:07:30.418 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:30.418 "is_configured": true, 00:07:30.418 "data_offset": 2048, 00:07:30.418 "data_size": 63488 00:07:30.418 } 00:07:30.418 ] 00:07:30.418 }' 00:07:30.418 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:30.418 14:32:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.677 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:30.677 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:07:30.677 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:30.677 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:30.677 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:30.677 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:30.677 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:30.677 14:32:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.677 14:32:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.677 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:30.677 [2024-10-01 14:32:22.343686] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:30.677 14:32:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.936 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:30.936 "name": "raid_bdev1", 00:07:30.936 "aliases": [ 00:07:30.936 "b1496f2d-71da-495a-9a87-7f431f9ac0a8" 00:07:30.936 ], 00:07:30.936 "product_name": "Raid Volume", 00:07:30.936 "block_size": 512, 00:07:30.936 "num_blocks": 190464, 00:07:30.936 "uuid": "b1496f2d-71da-495a-9a87-7f431f9ac0a8", 00:07:30.936 "assigned_rate_limits": { 00:07:30.936 "rw_ios_per_sec": 0, 00:07:30.936 "rw_mbytes_per_sec": 0, 00:07:30.936 "r_mbytes_per_sec": 0, 00:07:30.936 "w_mbytes_per_sec": 0 00:07:30.936 }, 00:07:30.936 "claimed": false, 00:07:30.936 "zoned": false, 00:07:30.936 "supported_io_types": { 00:07:30.936 "read": true, 00:07:30.936 "write": true, 00:07:30.936 "unmap": true, 00:07:30.936 "flush": true, 00:07:30.936 "reset": true, 00:07:30.936 "nvme_admin": false, 00:07:30.936 "nvme_io": false, 
00:07:30.936 "nvme_io_md": false, 00:07:30.936 "write_zeroes": true, 00:07:30.936 "zcopy": false, 00:07:30.936 "get_zone_info": false, 00:07:30.936 "zone_management": false, 00:07:30.936 "zone_append": false, 00:07:30.936 "compare": false, 00:07:30.936 "compare_and_write": false, 00:07:30.936 "abort": false, 00:07:30.936 "seek_hole": false, 00:07:30.936 "seek_data": false, 00:07:30.936 "copy": false, 00:07:30.936 "nvme_iov_md": false 00:07:30.936 }, 00:07:30.936 "memory_domains": [ 00:07:30.936 { 00:07:30.936 "dma_device_id": "system", 00:07:30.936 "dma_device_type": 1 00:07:30.936 }, 00:07:30.936 { 00:07:30.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:30.936 "dma_device_type": 2 00:07:30.936 }, 00:07:30.936 { 00:07:30.936 "dma_device_id": "system", 00:07:30.936 "dma_device_type": 1 00:07:30.936 }, 00:07:30.936 { 00:07:30.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:30.936 "dma_device_type": 2 00:07:30.936 }, 00:07:30.936 { 00:07:30.936 "dma_device_id": "system", 00:07:30.936 "dma_device_type": 1 00:07:30.936 }, 00:07:30.936 { 00:07:30.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:30.936 "dma_device_type": 2 00:07:30.936 } 00:07:30.936 ], 00:07:30.936 "driver_specific": { 00:07:30.936 "raid": { 00:07:30.936 "uuid": "b1496f2d-71da-495a-9a87-7f431f9ac0a8", 00:07:30.936 "strip_size_kb": 64, 00:07:30.936 "state": "online", 00:07:30.936 "raid_level": "concat", 00:07:30.936 "superblock": true, 00:07:30.936 "num_base_bdevs": 3, 00:07:30.936 "num_base_bdevs_discovered": 3, 00:07:30.937 "num_base_bdevs_operational": 3, 00:07:30.937 "base_bdevs_list": [ 00:07:30.937 { 00:07:30.937 "name": "pt1", 00:07:30.937 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:30.937 "is_configured": true, 00:07:30.937 "data_offset": 2048, 00:07:30.937 "data_size": 63488 00:07:30.937 }, 00:07:30.937 { 00:07:30.937 "name": "pt2", 00:07:30.937 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:30.937 "is_configured": true, 00:07:30.937 "data_offset": 2048, 00:07:30.937 
"data_size": 63488 00:07:30.937 }, 00:07:30.937 { 00:07:30.937 "name": "pt3", 00:07:30.937 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:30.937 "is_configured": true, 00:07:30.937 "data_offset": 2048, 00:07:30.937 "data_size": 63488 00:07:30.937 } 00:07:30.937 ] 00:07:30.937 } 00:07:30.937 } 00:07:30.937 }' 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:30.937 pt2 00:07:30.937 pt3' 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:30.937 [2024-10-01 14:32:22.539688] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b1496f2d-71da-495a-9a87-7f431f9ac0a8 '!=' b1496f2d-71da-495a-9a87-7f431f9ac0a8 ']' 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65466 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 65466 ']' 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 65466 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65466 00:07:30.937 killing process with pid 65466 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65466' 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 65466 00:07:30.937 [2024-10-01 14:32:22.597526] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:07:30.937 [2024-10-01 14:32:22.597611] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:30.937 14:32:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 65466 00:07:30.937 [2024-10-01 14:32:22.597673] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:30.937 [2024-10-01 14:32:22.597688] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:31.195 [2024-10-01 14:32:22.786579] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:32.133 ************************************ 00:07:32.133 END TEST raid_superblock_test 00:07:32.133 ************************************ 00:07:32.133 14:32:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:32.133 00:07:32.133 real 0m4.072s 00:07:32.133 user 0m5.805s 00:07:32.133 sys 0m0.620s 00:07:32.133 14:32:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:32.133 14:32:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.133 14:32:23 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:07:32.133 14:32:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:32.133 14:32:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:32.133 14:32:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:32.133 ************************************ 00:07:32.133 START TEST raid_read_error_test 00:07:32.133 ************************************ 00:07:32.133 14:32:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:07:32.133 14:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:32.133 14:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:07:32.133 14:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:32.133 14:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:32.133 14:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:32.133 14:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:32.133 14:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:32.133 14:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:32.133 14:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:32.133 14:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:32.133 14:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:32.133 14:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:07:32.133 14:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:32.133 14:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:32.133 14:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:32.133 14:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:32.133 14:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:32.133 14:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:32.133 14:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:32.133 14:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:32.133 14:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:32.133 14:32:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:32.133 14:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:32.133 14:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:32.133 14:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:32.133 14:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.vAiGPSDl6q 00:07:32.133 14:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65704 00:07:32.133 14:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65704 00:07:32.133 14:32:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 65704 ']' 00:07:32.133 14:32:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.133 14:32:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:32.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.133 14:32:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.133 14:32:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:32.133 14:32:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.133 14:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:32.133 [2024-10-01 14:32:23.752336] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:07:32.133 [2024-10-01 14:32:23.752453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65704 ] 00:07:32.394 [2024-10-01 14:32:23.902434] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.689 [2024-10-01 14:32:24.096846] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.689 [2024-10-01 14:32:24.231993] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:32.689 [2024-10-01 14:32:24.232157] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:32.949 14:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:32.949 14:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:32.949 14:32:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:32.949 14:32:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:32.949 14:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.949 14:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.210 BaseBdev1_malloc 00:07:33.210 14:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.210 14:32:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:33.210 14:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.210 14:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.210 true 00:07:33.210 14:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:33.210 14:32:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:07:33.210 14:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:33.210 14:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:33.210 [2024-10-01 14:32:24.664306] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:07:33.210 [2024-10-01 14:32:24.664357] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:33.210 [2024-10-01 14:32:24.664376] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:07:33.210 [2024-10-01 14:32:24.664386] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:33.210 [2024-10-01 14:32:24.666518] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:33.210 [2024-10-01 14:32:24.666557] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:07:33.210 BaseBdev1
00:07:33.210 14:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:33.210 14:32:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:07:33.210 14:32:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:07:33.210 14:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:33.210 14:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:33.210 BaseBdev2_malloc
00:07:33.210 14:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:33.210 14:32:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:07:33.210 14:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:33.210 14:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:33.210 true
00:07:33.210 14:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:33.210 14:32:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:07:33.210 14:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:33.210 14:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:33.210 [2024-10-01 14:32:24.725154] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:07:33.210 [2024-10-01 14:32:24.725205] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:33.210 [2024-10-01 14:32:24.725221] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:07:33.210 [2024-10-01 14:32:24.725231] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:33.211 [2024-10-01 14:32:24.727319] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:33.211 [2024-10-01 14:32:24.727462] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:07:33.211 BaseBdev2
00:07:33.211 14:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:33.211 14:32:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:07:33.211 14:32:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:07:33.211 14:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:33.211 14:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:33.211 BaseBdev3_malloc
00:07:33.211 14:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:33.211 14:32:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:07:33.211 14:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:33.211 14:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:33.211 true
00:07:33.211 14:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:33.211 14:32:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:07:33.211 14:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:33.211 14:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:33.211 [2024-10-01 14:32:24.768872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:07:33.211 [2024-10-01 14:32:24.768913] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:33.211 [2024-10-01 14:32:24.768928] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:07:33.211 [2024-10-01 14:32:24.768939] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:33.211 [2024-10-01 14:32:24.771512] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:33.211 [2024-10-01 14:32:24.771557] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:07:33.211 BaseBdev3
00:07:33.211 14:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:33.211 14:32:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s
00:07:33.211 14:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:33.211 14:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:33.211 [2024-10-01 14:32:24.776952] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:33.211 [2024-10-01 14:32:24.778791] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:07:33.211 [2024-10-01 14:32:24.778867] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:07:33.211 [2024-10-01 14:32:24.779065] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:07:33.211 [2024-10-01 14:32:24.779075] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:07:33.211 [2024-10-01 14:32:24.779335] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:07:33.211 [2024-10-01 14:32:24.779473] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:07:33.211 [2024-10-01 14:32:24.779484] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200
00:07:33.211 [2024-10-01 14:32:24.779619] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:33.211 14:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:33.211 14:32:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:07:33.211 14:32:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:33.211 14:32:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:33.211 14:32:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:33.211 14:32:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:33.211 14:32:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:07:33.211 14:32:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:33.211 14:32:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:33.211 14:32:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:33.211 14:32:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:33.211 14:32:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:33.211 14:32:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:33.211 14:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:33.211 14:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:33.211 14:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:33.211 14:32:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:33.211 "name": "raid_bdev1",
00:07:33.211 "uuid": "fbd7c41b-b5d7-477f-be56-a809c93803f0",
00:07:33.211 "strip_size_kb": 64,
00:07:33.211 "state": "online",
00:07:33.211 "raid_level": "concat",
00:07:33.211 "superblock": true,
00:07:33.211 "num_base_bdevs": 3,
00:07:33.211 "num_base_bdevs_discovered": 3,
00:07:33.211 "num_base_bdevs_operational": 3,
00:07:33.211 "base_bdevs_list": [
00:07:33.211 {
00:07:33.211 "name": "BaseBdev1",
00:07:33.211 "uuid": "bbf0bd4d-8f5a-5e03-ac21-59ea0dd5c9f5",
00:07:33.211 "is_configured": true,
00:07:33.211 "data_offset": 2048,
00:07:33.211 "data_size": 63488
00:07:33.211 },
00:07:33.211 {
00:07:33.211 "name": "BaseBdev2",
00:07:33.211 "uuid": "82bfc325-9146-5672-aadd-1c44fd780d1e",
00:07:33.211 "is_configured": true,
00:07:33.211 "data_offset": 2048,
00:07:33.211 "data_size": 63488
00:07:33.211 },
00:07:33.211 {
00:07:33.211 "name": "BaseBdev3",
00:07:33.211 "uuid": "f209768d-e564-574e-99e4-c7de0ff02929",
00:07:33.211 "is_configured": true,
00:07:33.211 "data_offset": 2048,
00:07:33.211 "data_size": 63488
00:07:33.211 }
00:07:33.211 ]
00:07:33.211 }'
00:07:33.211 14:32:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:33.211 14:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:33.472 14:32:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:07:33.472 14:32:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:07:33.733 [2024-10-01 14:32:25.193987] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:07:34.675 14:32:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:07:34.675 14:32:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:34.675 14:32:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:34.675 14:32:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:34.675 14:32:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:07:34.675 14:32:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]]
00:07:34.675 14:32:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3
00:07:34.675 14:32:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:07:34.675 14:32:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:34.675 14:32:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:34.675 14:32:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:34.675 14:32:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:34.675 14:32:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:07:34.675 14:32:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:34.675 14:32:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:34.675 14:32:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:34.675 14:32:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:34.675 14:32:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:34.675 14:32:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:34.675 14:32:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:34.675 14:32:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:34.675 14:32:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:34.675 14:32:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:34.675 "name": "raid_bdev1",
00:07:34.675 "uuid": "fbd7c41b-b5d7-477f-be56-a809c93803f0",
00:07:34.675 "strip_size_kb": 64,
00:07:34.675 "state": "online",
00:07:34.675 "raid_level": "concat",
00:07:34.675 "superblock": true,
00:07:34.675 "num_base_bdevs": 3,
00:07:34.675 "num_base_bdevs_discovered": 3,
00:07:34.675 "num_base_bdevs_operational": 3,
00:07:34.675 "base_bdevs_list": [
00:07:34.675 {
00:07:34.675 "name": "BaseBdev1",
00:07:34.675 "uuid": "bbf0bd4d-8f5a-5e03-ac21-59ea0dd5c9f5",
00:07:34.675 "is_configured": true,
00:07:34.675 "data_offset": 2048,
00:07:34.675 "data_size": 63488
00:07:34.675 },
00:07:34.675 {
00:07:34.675 "name": "BaseBdev2",
00:07:34.675 "uuid": "82bfc325-9146-5672-aadd-1c44fd780d1e",
00:07:34.675 "is_configured": true,
00:07:34.675 "data_offset": 2048,
00:07:34.675 "data_size": 63488
00:07:34.675 },
00:07:34.675 {
00:07:34.675 "name": "BaseBdev3",
00:07:34.675 "uuid": "f209768d-e564-574e-99e4-c7de0ff02929",
00:07:34.675 "is_configured": true,
00:07:34.675 "data_offset": 2048,
00:07:34.675 "data_size": 63488
00:07:34.675 }
00:07:34.675 ]
00:07:34.675 }'
00:07:34.675 14:32:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:34.675 14:32:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:34.937 14:32:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:07:34.937 14:32:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:34.937 14:32:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:34.937 [2024-10-01 14:32:26.439873] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:07:34.937 [2024-10-01 14:32:26.439903] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:07:34.937 [2024-10-01 14:32:26.442998] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:34.937 [2024-10-01 14:32:26.443040] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:34.937 [2024-10-01 14:32:26.443075] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:34.937 [2024-10-01 14:32:26.443084] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline
00:07:34.937 {
00:07:34.937 "results": [
00:07:34.937 {
00:07:34.937 "job": "raid_bdev1",
00:07:34.937 "core_mask": "0x1",
00:07:34.937 "workload": "randrw",
00:07:34.937 "percentage": 50,
00:07:34.937 "status": "finished",
00:07:34.937 "queue_depth": 1,
00:07:34.937 "io_size": 131072,
00:07:34.937 "runtime": 1.244048,
00:07:34.937 "iops": 14973.698764034829,
00:07:34.937 "mibps": 1871.7123455043536,
00:07:34.937 "io_failed": 1,
00:07:34.937 "io_timeout": 0,
00:07:34.937 "avg_latency_us": 91.21504618522816,
00:07:34.937 "min_latency_us": 33.28,
00:07:34.937 "max_latency_us": 1688.8123076923077
00:07:34.937 }
00:07:34.937 ],
00:07:34.937 "core_count": 1
00:07:34.937 }
00:07:34.937 14:32:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:34.937 14:32:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65704
00:07:34.937 14:32:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 65704 ']'
00:07:34.937 14:32:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 65704
00:07:34.937 14:32:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname
00:07:34.937 14:32:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:34.937 14:32:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65704
killing process with pid 65704
14:32:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
14:32:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
14:32:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65704'
14:32:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 65704
00:07:34.937 [2024-10-01 14:32:26.472390] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:34.937 14:32:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 65704
00:07:34.937 [2024-10-01 14:32:26.616300] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:35.878 14:32:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.vAiGPSDl6q
00:07:35.878 14:32:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:07:35.878 14:32:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:07:35.878 14:32:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.80
00:07:35.878 14:32:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat
00:07:35.878 14:32:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:07:35.878 14:32:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:07:35.878 14:32:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.80 != \0\.\0\0 ]]
00:07:35.878
00:07:35.878 real 0m3.820s
00:07:35.878 user 0m4.531s
00:07:35.878 sys 0m0.403s
00:07:35.878 14:32:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:35.878 ************************************
00:07:35.878 END TEST raid_read_error_test
00:07:35.878 ************************************
00:07:35.878 14:32:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:35.878 14:32:27 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write
00:07:35.878 14:32:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:07:35.878 14:32:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:35.878 14:32:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:36.152 ************************************
00:07:36.152 START TEST raid_write_error_test
00:07:36.152 ************************************
00:07:36.152 14:32:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write
00:07:36.152 14:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat
00:07:36.152 14:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3
00:07:36.152 14:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write
00:07:36.152 14:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:07:36.152 14:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:07:36.152 14:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:07:36.152 14:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:07:36.152 14:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:07:36.152 14:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:07:36.152 14:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:07:36.152 14:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:07:36.152 14:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:07:36.152 14:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:07:36.152 14:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:07:36.152 14:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:07:36.152 14:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:07:36.152 14:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:07:36.152 14:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:07:36.152 14:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:07:36.152 14:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:07:36.152 14:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:07:36.152 14:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']'
00:07:36.152 14:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:07:36.152 14:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:07:36.152 14:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:07:36.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:36.152 14:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.oljziHVEUD
00:07:36.152 14:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65844
00:07:36.152 14:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65844
00:07:36.152 14:32:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 65844 ']'
00:07:36.152 14:32:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:36.152 14:32:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:36.152 14:32:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:36.152 14:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:07:36.152 14:32:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:36.152 14:32:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:36.152 [2024-10-01 14:32:27.641217] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization...
00:07:36.152 [2024-10-01 14:32:27.641337] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65844 ]
00:07:36.152 [2024-10-01 14:32:27.790952] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:36.422 [2024-10-01 14:32:27.978757] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:36.683 [2024-10-01 14:32:28.114162] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:36.683 [2024-10-01 14:32:28.114215] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:36.944 14:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:36.944 14:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0
00:07:36.944 14:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:07:36.944 14:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:07:36.944 14:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:36.944 14:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:36.944 BaseBdev1_malloc
00:07:36.944 14:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:36.944 14:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:07:36.944 14:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:36.944 14:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:36.944 true
00:07:36.944 14:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:36.944 14:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:07:36.944 14:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:36.944 14:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:36.944 [2024-10-01 14:32:28.562844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:07:36.944 [2024-10-01 14:32:28.562896] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:36.944 [2024-10-01 14:32:28.562912] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:07:36.944 [2024-10-01 14:32:28.562924] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:36.944 [2024-10-01 14:32:28.565027] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:36.944 [2024-10-01 14:32:28.565166] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:07:36.944 BaseBdev1
00:07:36.944 14:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:36.944 14:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:07:36.944 14:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:07:36.944 14:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:36.944 14:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:36.944 BaseBdev2_malloc
00:07:36.944 14:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:36.944 14:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:07:36.944 14:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:36.944 14:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:37.204 true
00:07:37.204 14:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:37.204 14:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:07:37.204 14:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:37.204 14:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:37.204 [2024-10-01 14:32:28.629930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:07:37.204 [2024-10-01 14:32:28.629981] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:37.204 [2024-10-01 14:32:28.629997] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:07:37.204 [2024-10-01 14:32:28.630007] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:37.204 [2024-10-01 14:32:28.632098] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:37.204 [2024-10-01 14:32:28.632134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:07:37.204 BaseBdev2
00:07:37.204 14:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:37.204 14:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:07:37.204 14:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:07:37.204 14:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:37.204 14:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:37.204 BaseBdev3_malloc
00:07:37.204 14:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:37.204 14:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:07:37.204 14:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:37.204 14:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:37.204 true
00:07:37.204 14:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:37.204 14:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:07:37.204 14:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:37.204 14:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:37.204 [2024-10-01 14:32:28.673736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:07:37.204 [2024-10-01 14:32:28.673777] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:37.204 [2024-10-01 14:32:28.673791] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:07:37.204 [2024-10-01 14:32:28.673801] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:37.204 [2024-10-01 14:32:28.675876] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:37.204 [2024-10-01 14:32:28.675909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:07:37.204 BaseBdev3
00:07:37.204 14:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:37.204 14:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s
00:07:37.204 14:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:37.204 14:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:37.204 [2024-10-01 14:32:28.681811] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:37.205 [2024-10-01 14:32:28.683601] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:07:37.205 [2024-10-01 14:32:28.683676] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:07:37.205 [2024-10-01 14:32:28.683882] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:07:37.205 [2024-10-01 14:32:28.683893] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:07:37.205 [2024-10-01 14:32:28.684142] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:07:37.205 [2024-10-01 14:32:28.684283] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:07:37.205 [2024-10-01 14:32:28.684294] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200
00:07:37.205 [2024-10-01 14:32:28.684426] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:37.205 14:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:37.205 14:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:07:37.205 14:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:37.205 14:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:37.205 14:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:37.205 14:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:37.205 14:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:07:37.205 14:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:37.205 14:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:37.205 14:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:37.205 14:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:37.205 14:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:37.205 14:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:37.205 14:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:37.205 14:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:37.205 14:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:37.205 14:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:37.205 "name": "raid_bdev1",
00:07:37.205 "uuid": "a89da274-14d7-445a-9ebb-3d8f5bb1e339",
00:07:37.205 "strip_size_kb": 64,
00:07:37.205 "state": "online",
00:07:37.205 "raid_level": "concat",
00:07:37.205 "superblock": true,
00:07:37.205 "num_base_bdevs": 3,
00:07:37.205 "num_base_bdevs_discovered": 3,
00:07:37.205 "num_base_bdevs_operational": 3,
00:07:37.205 "base_bdevs_list": [
00:07:37.205 {
00:07:37.205 "name": "BaseBdev1",
00:07:37.205 "uuid": "b08d09f7-70a2-5c33-8430-472e4516eb25",
00:07:37.205 "is_configured": true,
00:07:37.205 "data_offset": 2048,
00:07:37.205 "data_size": 63488
00:07:37.205 },
00:07:37.205 {
00:07:37.205 "name": "BaseBdev2",
00:07:37.205 "uuid": "6a91a678-24e4-578f-8bad-d8ab788a355b",
00:07:37.205 "is_configured": true,
00:07:37.205 "data_offset": 2048,
00:07:37.205 "data_size": 63488
00:07:37.205 },
00:07:37.205 {
00:07:37.205 "name": "BaseBdev3",
00:07:37.205 "uuid": "b01d0fb9-0d14-5897-bfcf-4ea247be6711",
00:07:37.205 "is_configured": true,
00:07:37.205 "data_offset": 2048,
00:07:37.205 "data_size": 63488
00:07:37.205 }
00:07:37.205 ]
00:07:37.205 }'
00:07:37.205 14:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:37.205 14:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:37.465 14:32:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:07:37.465 14:32:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
[2024-10-01 14:32:29.082849] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:07:38.409 14:32:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:07:38.409 14:32:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:38.409 14:32:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:38.409 14:32:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:38.409 14:32:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:07:38.409 14:32:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]]
00:07:38.409 14:32:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3
00:07:38.409 14:32:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:07:38.409 14:32:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:38.409 14:32:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:38.409 14:32:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:38.409 14:32:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:38.409 14:32:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:07:38.409 14:32:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:38.409 14:32:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:38.409 14:32:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:38.409 14:32:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:38.409 14:32:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:38.409 14:32:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:38.409 14:32:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:38.409 14:32:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:38.409 14:32:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:38.409 14:32:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:38.409 "name": "raid_bdev1",
00:07:38.409 "uuid": "a89da274-14d7-445a-9ebb-3d8f5bb1e339",
00:07:38.409 "strip_size_kb": 64,
00:07:38.409 "state": "online",
00:07:38.409 "raid_level": "concat",
00:07:38.409 "superblock": true,
00:07:38.409 "num_base_bdevs": 3,
00:07:38.409 "num_base_bdevs_discovered": 3,
00:07:38.409 "num_base_bdevs_operational": 3,
00:07:38.409 "base_bdevs_list": [
00:07:38.409 {
00:07:38.409 "name": "BaseBdev1",
00:07:38.409 "uuid": "b08d09f7-70a2-5c33-8430-472e4516eb25",
00:07:38.409 "is_configured": true,
00:07:38.409 "data_offset": 2048,
00:07:38.409 "data_size": 63488
00:07:38.409 },
00:07:38.409 {
00:07:38.409 "name": "BaseBdev2",
00:07:38.409 "uuid": "6a91a678-24e4-578f-8bad-d8ab788a355b",
00:07:38.409 "is_configured": true,
00:07:38.409 "data_offset": 2048,
00:07:38.409 "data_size": 63488
00:07:38.409 },
00:07:38.409 {
00:07:38.409 "name": "BaseBdev3",
00:07:38.409 "uuid": "b01d0fb9-0d14-5897-bfcf-4ea247be6711",
00:07:38.409 "is_configured": true,
00:07:38.409 "data_offset": 2048,
00:07:38.409 "data_size": 63488
00:07:38.409 }
00:07:38.409 ]
00:07:38.409 }'
00:07:38.409 14:32:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:38.409 14:32:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:38.981 14:32:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:07:38.981 14:32:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:38.981 14:32:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:38.981 [2024-10-01 14:32:30.361982] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:07:38.981 [2024-10-01 14:32:30.362009] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:07:38.981 [2024-10-01 14:32:30.365021] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*:
raid_bdev_destruct 00:07:38.981 [2024-10-01 14:32:30.365169] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:38.981 [2024-10-01 14:32:30.365219] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:38.981 [2024-10-01 14:32:30.365230] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:07:38.981 { 00:07:38.981 "results": [ 00:07:38.981 { 00:07:38.981 "job": "raid_bdev1", 00:07:38.981 "core_mask": "0x1", 00:07:38.981 "workload": "randrw", 00:07:38.981 "percentage": 50, 00:07:38.981 "status": "finished", 00:07:38.981 "queue_depth": 1, 00:07:38.981 "io_size": 131072, 00:07:38.981 "runtime": 1.277277, 00:07:38.981 "iops": 14709.416986291932, 00:07:38.981 "mibps": 1838.6771232864915, 00:07:38.981 "io_failed": 1, 00:07:38.981 "io_timeout": 0, 00:07:38.981 "avg_latency_us": 92.99362949680051, 00:07:38.981 "min_latency_us": 33.28, 00:07:38.981 "max_latency_us": 1701.4153846153847 00:07:38.981 } 00:07:38.981 ], 00:07:38.981 "core_count": 1 00:07:38.981 } 00:07:38.981 14:32:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.981 14:32:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65844 00:07:38.981 14:32:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 65844 ']' 00:07:38.981 14:32:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 65844 00:07:38.981 14:32:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:07:38.981 14:32:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:38.981 14:32:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65844 00:07:38.982 killing process with pid 65844 00:07:38.982 14:32:30 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:38.982 14:32:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:38.982 14:32:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65844' 00:07:38.982 14:32:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 65844 00:07:38.982 [2024-10-01 14:32:30.394685] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:38.982 14:32:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 65844 00:07:38.982 [2024-10-01 14:32:30.538099] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:39.928 14:32:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.oljziHVEUD 00:07:39.928 14:32:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:39.928 14:32:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:39.928 14:32:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.78 00:07:39.928 14:32:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:39.928 14:32:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:39.928 14:32:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:39.928 14:32:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.78 != \0\.\0\0 ]] 00:07:39.928 00:07:39.928 real 0m3.835s 00:07:39.928 user 0m4.551s 00:07:39.928 sys 0m0.403s 00:07:39.928 14:32:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:39.928 14:32:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.928 ************************************ 00:07:39.928 END TEST raid_write_error_test 00:07:39.928 ************************************ 00:07:39.928 
14:32:31 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:39.928 14:32:31 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:07:39.928 14:32:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:39.928 14:32:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:39.928 14:32:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:39.928 ************************************ 00:07:39.928 START TEST raid_state_function_test 00:07:39.928 ************************************ 00:07:39.928 14:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false 00:07:39.928 14:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:39.928 14:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:07:39.928 14:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:39.928 14:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:39.928 14:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:39.928 14:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:39.928 14:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:39.928 14:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:39.928 14:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:39.928 14:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:39.928 14:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:39.928 14:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:07:39.928 14:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:07:39.928 14:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:39.928 14:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:39.928 14:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:39.928 14:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:39.928 14:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:39.928 14:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:39.928 14:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:39.928 14:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:39.928 14:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:39.928 14:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:39.928 14:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:39.928 14:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:39.928 Process raid pid: 65982 00:07:39.928 14:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65982 00:07:39.928 14:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65982' 00:07:39.928 14:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65982 00:07:39.928 14:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 65982 ']' 00:07:39.928 14:32:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.928 14:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:39.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.928 14:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.928 14:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:39.928 14:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.928 14:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:39.928 [2024-10-01 14:32:31.539142] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:07:39.928 [2024-10-01 14:32:31.539257] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:40.189 [2024-10-01 14:32:31.690821] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.449 [2024-10-01 14:32:31.877471] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.449 [2024-10-01 14:32:32.015505] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:40.449 [2024-10-01 14:32:32.015542] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:41.018 14:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:41.018 14:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:41.018 14:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # 
rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:41.018 14:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.018 14:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.018 [2024-10-01 14:32:32.421593] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:41.018 [2024-10-01 14:32:32.421751] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:41.018 [2024-10-01 14:32:32.421824] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:41.018 [2024-10-01 14:32:32.421852] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:41.018 [2024-10-01 14:32:32.421874] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:41.018 [2024-10-01 14:32:32.421897] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:41.018 14:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.018 14:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:41.018 14:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:41.018 14:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:41.018 14:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:41.018 14:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:41.018 14:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:41.018 14:32:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.018 14:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.018 14:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.018 14:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.018 14:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.018 14:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.018 14:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.018 14:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.018 14:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.018 14:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.018 "name": "Existed_Raid", 00:07:41.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.018 "strip_size_kb": 0, 00:07:41.018 "state": "configuring", 00:07:41.018 "raid_level": "raid1", 00:07:41.018 "superblock": false, 00:07:41.018 "num_base_bdevs": 3, 00:07:41.018 "num_base_bdevs_discovered": 0, 00:07:41.018 "num_base_bdevs_operational": 3, 00:07:41.018 "base_bdevs_list": [ 00:07:41.018 { 00:07:41.018 "name": "BaseBdev1", 00:07:41.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.018 "is_configured": false, 00:07:41.018 "data_offset": 0, 00:07:41.018 "data_size": 0 00:07:41.018 }, 00:07:41.018 { 00:07:41.018 "name": "BaseBdev2", 00:07:41.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.019 "is_configured": false, 00:07:41.019 "data_offset": 0, 00:07:41.019 "data_size": 0 00:07:41.019 }, 00:07:41.019 { 00:07:41.019 "name": "BaseBdev3", 00:07:41.019 "uuid": "00000000-0000-0000-0000-000000000000", 
00:07:41.019 "is_configured": false, 00:07:41.019 "data_offset": 0, 00:07:41.019 "data_size": 0 00:07:41.019 } 00:07:41.019 ] 00:07:41.019 }' 00:07:41.019 14:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.019 14:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.280 14:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:41.280 14:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.280 14:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.280 [2024-10-01 14:32:32.757610] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:41.280 [2024-10-01 14:32:32.757644] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:41.280 14:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.280 14:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:41.280 14:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.280 14:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.280 [2024-10-01 14:32:32.765628] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:41.280 [2024-10-01 14:32:32.765670] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:41.280 [2024-10-01 14:32:32.765678] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:41.280 [2024-10-01 14:32:32.765687] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:41.280 
[2024-10-01 14:32:32.765693] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:41.280 [2024-10-01 14:32:32.765702] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:41.280 14:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.280 14:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:41.280 14:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.280 14:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.280 [2024-10-01 14:32:32.816976] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:41.280 BaseBdev1 00:07:41.280 14:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.280 14:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:41.280 14:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:41.280 14:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:41.280 14:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:41.280 14:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:41.280 14:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:41.280 14:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:41.280 14:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.280 14:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.280 14:32:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.280 14:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:41.280 14:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.280 14:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.280 [ 00:07:41.280 { 00:07:41.280 "name": "BaseBdev1", 00:07:41.280 "aliases": [ 00:07:41.280 "5c97b5fc-b679-4c05-b747-86711a2c3bbd" 00:07:41.280 ], 00:07:41.280 "product_name": "Malloc disk", 00:07:41.280 "block_size": 512, 00:07:41.280 "num_blocks": 65536, 00:07:41.280 "uuid": "5c97b5fc-b679-4c05-b747-86711a2c3bbd", 00:07:41.280 "assigned_rate_limits": { 00:07:41.280 "rw_ios_per_sec": 0, 00:07:41.280 "rw_mbytes_per_sec": 0, 00:07:41.280 "r_mbytes_per_sec": 0, 00:07:41.280 "w_mbytes_per_sec": 0 00:07:41.280 }, 00:07:41.280 "claimed": true, 00:07:41.280 "claim_type": "exclusive_write", 00:07:41.280 "zoned": false, 00:07:41.280 "supported_io_types": { 00:07:41.280 "read": true, 00:07:41.280 "write": true, 00:07:41.280 "unmap": true, 00:07:41.280 "flush": true, 00:07:41.280 "reset": true, 00:07:41.280 "nvme_admin": false, 00:07:41.280 "nvme_io": false, 00:07:41.280 "nvme_io_md": false, 00:07:41.280 "write_zeroes": true, 00:07:41.280 "zcopy": true, 00:07:41.280 "get_zone_info": false, 00:07:41.280 "zone_management": false, 00:07:41.280 "zone_append": false, 00:07:41.280 "compare": false, 00:07:41.280 "compare_and_write": false, 00:07:41.280 "abort": true, 00:07:41.280 "seek_hole": false, 00:07:41.280 "seek_data": false, 00:07:41.280 "copy": true, 00:07:41.280 "nvme_iov_md": false 00:07:41.280 }, 00:07:41.280 "memory_domains": [ 00:07:41.280 { 00:07:41.280 "dma_device_id": "system", 00:07:41.280 "dma_device_type": 1 00:07:41.280 }, 00:07:41.280 { 00:07:41.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.280 "dma_device_type": 
2 00:07:41.280 } 00:07:41.280 ], 00:07:41.280 "driver_specific": {} 00:07:41.280 } 00:07:41.280 ] 00:07:41.280 14:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.280 14:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:41.280 14:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:41.280 14:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:41.281 14:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:41.281 14:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:41.281 14:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:41.281 14:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:41.281 14:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.281 14:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.281 14:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.281 14:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.281 14:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.281 14:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.281 14:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.281 14:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.281 14:32:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.281 14:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.281 "name": "Existed_Raid", 00:07:41.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.281 "strip_size_kb": 0, 00:07:41.281 "state": "configuring", 00:07:41.281 "raid_level": "raid1", 00:07:41.281 "superblock": false, 00:07:41.281 "num_base_bdevs": 3, 00:07:41.281 "num_base_bdevs_discovered": 1, 00:07:41.281 "num_base_bdevs_operational": 3, 00:07:41.281 "base_bdevs_list": [ 00:07:41.281 { 00:07:41.281 "name": "BaseBdev1", 00:07:41.281 "uuid": "5c97b5fc-b679-4c05-b747-86711a2c3bbd", 00:07:41.281 "is_configured": true, 00:07:41.281 "data_offset": 0, 00:07:41.281 "data_size": 65536 00:07:41.281 }, 00:07:41.281 { 00:07:41.281 "name": "BaseBdev2", 00:07:41.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.281 "is_configured": false, 00:07:41.281 "data_offset": 0, 00:07:41.281 "data_size": 0 00:07:41.281 }, 00:07:41.281 { 00:07:41.281 "name": "BaseBdev3", 00:07:41.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.281 "is_configured": false, 00:07:41.281 "data_offset": 0, 00:07:41.281 "data_size": 0 00:07:41.281 } 00:07:41.281 ] 00:07:41.281 }' 00:07:41.281 14:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.281 14:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.541 14:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:41.541 14:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.541 14:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.802 [2024-10-01 14:32:33.225115] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:41.802 [2024-10-01 14:32:33.225164] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:41.802 14:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.802 14:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:41.802 14:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.802 14:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.802 [2024-10-01 14:32:33.233141] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:41.802 [2024-10-01 14:32:33.234988] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:41.802 [2024-10-01 14:32:33.235029] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:41.802 [2024-10-01 14:32:33.235039] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:41.802 [2024-10-01 14:32:33.235049] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:41.802 14:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.802 14:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:41.802 14:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:41.802 14:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:41.802 14:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:41.802 14:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:41.802 14:32:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:41.802 14:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:41.802 14:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:41.802 14:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.802 14:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.802 14:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.802 14:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.802 14:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.802 14:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.802 14:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.802 14:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.802 14:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.802 14:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.802 "name": "Existed_Raid", 00:07:41.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.802 "strip_size_kb": 0, 00:07:41.802 "state": "configuring", 00:07:41.802 "raid_level": "raid1", 00:07:41.802 "superblock": false, 00:07:41.802 "num_base_bdevs": 3, 00:07:41.802 "num_base_bdevs_discovered": 1, 00:07:41.802 "num_base_bdevs_operational": 3, 00:07:41.802 "base_bdevs_list": [ 00:07:41.802 { 00:07:41.802 "name": "BaseBdev1", 00:07:41.802 "uuid": "5c97b5fc-b679-4c05-b747-86711a2c3bbd", 00:07:41.802 "is_configured": true, 00:07:41.802 "data_offset": 0, 00:07:41.802 "data_size": 65536 
00:07:41.802 }, 00:07:41.802 { 00:07:41.802 "name": "BaseBdev2", 00:07:41.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.802 "is_configured": false, 00:07:41.802 "data_offset": 0, 00:07:41.802 "data_size": 0 00:07:41.802 }, 00:07:41.802 { 00:07:41.802 "name": "BaseBdev3", 00:07:41.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.802 "is_configured": false, 00:07:41.802 "data_offset": 0, 00:07:41.802 "data_size": 0 00:07:41.802 } 00:07:41.802 ] 00:07:41.802 }' 00:07:41.802 14:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.802 14:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.063 14:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:42.063 14:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.063 14:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.063 [2024-10-01 14:32:33.587923] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:42.063 BaseBdev2 00:07:42.063 14:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.063 14:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:42.063 14:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:42.063 14:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:42.063 14:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:42.063 14:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:42.063 14:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:42.063 14:32:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:42.063 14:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.063 14:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.063 14:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.063 14:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:42.063 14:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.063 14:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.063 [ 00:07:42.063 { 00:07:42.063 "name": "BaseBdev2", 00:07:42.063 "aliases": [ 00:07:42.063 "4e850843-09a6-4d35-b01a-7651f36cec1c" 00:07:42.063 ], 00:07:42.063 "product_name": "Malloc disk", 00:07:42.063 "block_size": 512, 00:07:42.064 "num_blocks": 65536, 00:07:42.064 "uuid": "4e850843-09a6-4d35-b01a-7651f36cec1c", 00:07:42.064 "assigned_rate_limits": { 00:07:42.064 "rw_ios_per_sec": 0, 00:07:42.064 "rw_mbytes_per_sec": 0, 00:07:42.064 "r_mbytes_per_sec": 0, 00:07:42.064 "w_mbytes_per_sec": 0 00:07:42.064 }, 00:07:42.064 "claimed": true, 00:07:42.064 "claim_type": "exclusive_write", 00:07:42.064 "zoned": false, 00:07:42.064 "supported_io_types": { 00:07:42.064 "read": true, 00:07:42.064 "write": true, 00:07:42.064 "unmap": true, 00:07:42.064 "flush": true, 00:07:42.064 "reset": true, 00:07:42.064 "nvme_admin": false, 00:07:42.064 "nvme_io": false, 00:07:42.064 "nvme_io_md": false, 00:07:42.064 "write_zeroes": true, 00:07:42.064 "zcopy": true, 00:07:42.064 "get_zone_info": false, 00:07:42.064 "zone_management": false, 00:07:42.064 "zone_append": false, 00:07:42.064 "compare": false, 00:07:42.064 "compare_and_write": false, 00:07:42.064 "abort": true, 00:07:42.064 "seek_hole": false, 00:07:42.064 
"seek_data": false, 00:07:42.064 "copy": true, 00:07:42.064 "nvme_iov_md": false 00:07:42.064 }, 00:07:42.064 "memory_domains": [ 00:07:42.064 { 00:07:42.064 "dma_device_id": "system", 00:07:42.064 "dma_device_type": 1 00:07:42.064 }, 00:07:42.064 { 00:07:42.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.064 "dma_device_type": 2 00:07:42.064 } 00:07:42.064 ], 00:07:42.064 "driver_specific": {} 00:07:42.064 } 00:07:42.064 ] 00:07:42.064 14:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.064 14:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:42.064 14:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:42.064 14:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:42.064 14:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:42.064 14:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:42.064 14:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:42.064 14:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:42.064 14:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:42.064 14:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:42.064 14:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.064 14:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.064 14:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.064 14:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- 
# local tmp 00:07:42.064 14:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:42.064 14:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.064 14:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.064 14:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.064 14:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.064 14:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.064 "name": "Existed_Raid", 00:07:42.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.064 "strip_size_kb": 0, 00:07:42.064 "state": "configuring", 00:07:42.064 "raid_level": "raid1", 00:07:42.064 "superblock": false, 00:07:42.064 "num_base_bdevs": 3, 00:07:42.064 "num_base_bdevs_discovered": 2, 00:07:42.064 "num_base_bdevs_operational": 3, 00:07:42.064 "base_bdevs_list": [ 00:07:42.064 { 00:07:42.064 "name": "BaseBdev1", 00:07:42.064 "uuid": "5c97b5fc-b679-4c05-b747-86711a2c3bbd", 00:07:42.064 "is_configured": true, 00:07:42.064 "data_offset": 0, 00:07:42.064 "data_size": 65536 00:07:42.064 }, 00:07:42.064 { 00:07:42.064 "name": "BaseBdev2", 00:07:42.064 "uuid": "4e850843-09a6-4d35-b01a-7651f36cec1c", 00:07:42.064 "is_configured": true, 00:07:42.064 "data_offset": 0, 00:07:42.064 "data_size": 65536 00:07:42.064 }, 00:07:42.064 { 00:07:42.064 "name": "BaseBdev3", 00:07:42.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.064 "is_configured": false, 00:07:42.064 "data_offset": 0, 00:07:42.064 "data_size": 0 00:07:42.064 } 00:07:42.064 ] 00:07:42.064 }' 00:07:42.064 14:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.064 14:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:42.326 14:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:42.326 14:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.326 14:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.588 [2024-10-01 14:32:34.027030] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:42.588 [2024-10-01 14:32:34.027234] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:42.588 [2024-10-01 14:32:34.027279] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:42.588 [2024-10-01 14:32:34.027929] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:42.588 [2024-10-01 14:32:34.028191] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:42.588 [2024-10-01 14:32:34.028226] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:42.588 [2024-10-01 14:32:34.028587] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:42.588 BaseBdev3 00:07:42.588 14:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.588 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:07:42.588 14:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:07:42.588 14:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:42.588 14:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:42.588 14:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:42.588 14:32:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:42.588 14:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:42.588 14:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.588 14:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.588 14:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.588 14:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:42.588 14:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.588 14:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.588 [ 00:07:42.588 { 00:07:42.588 "name": "BaseBdev3", 00:07:42.588 "aliases": [ 00:07:42.588 "fb8655b1-28ad-4988-8751-3b347dde8d1b" 00:07:42.588 ], 00:07:42.588 "product_name": "Malloc disk", 00:07:42.588 "block_size": 512, 00:07:42.588 "num_blocks": 65536, 00:07:42.588 "uuid": "fb8655b1-28ad-4988-8751-3b347dde8d1b", 00:07:42.588 "assigned_rate_limits": { 00:07:42.588 "rw_ios_per_sec": 0, 00:07:42.588 "rw_mbytes_per_sec": 0, 00:07:42.588 "r_mbytes_per_sec": 0, 00:07:42.588 "w_mbytes_per_sec": 0 00:07:42.588 }, 00:07:42.588 "claimed": true, 00:07:42.588 "claim_type": "exclusive_write", 00:07:42.588 "zoned": false, 00:07:42.588 "supported_io_types": { 00:07:42.588 "read": true, 00:07:42.588 "write": true, 00:07:42.588 "unmap": true, 00:07:42.588 "flush": true, 00:07:42.588 "reset": true, 00:07:42.588 "nvme_admin": false, 00:07:42.588 "nvme_io": false, 00:07:42.588 "nvme_io_md": false, 00:07:42.588 "write_zeroes": true, 00:07:42.588 "zcopy": true, 00:07:42.588 "get_zone_info": false, 00:07:42.588 "zone_management": false, 00:07:42.588 "zone_append": false, 00:07:42.588 "compare": false, 00:07:42.588 "compare_and_write": false, 
00:07:42.588 "abort": true, 00:07:42.588 "seek_hole": false, 00:07:42.588 "seek_data": false, 00:07:42.588 "copy": true, 00:07:42.588 "nvme_iov_md": false 00:07:42.588 }, 00:07:42.588 "memory_domains": [ 00:07:42.588 { 00:07:42.588 "dma_device_id": "system", 00:07:42.588 "dma_device_type": 1 00:07:42.588 }, 00:07:42.588 { 00:07:42.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.588 "dma_device_type": 2 00:07:42.588 } 00:07:42.588 ], 00:07:42.588 "driver_specific": {} 00:07:42.588 } 00:07:42.588 ] 00:07:42.588 14:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.588 14:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:42.588 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:42.589 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:42.589 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:07:42.589 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:42.589 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:42.589 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:42.589 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:42.589 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:42.589 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.589 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.589 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.589 
14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.589 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.589 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:42.589 14:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.589 14:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.589 14:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.589 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.589 "name": "Existed_Raid", 00:07:42.589 "uuid": "101413a8-ce50-4f77-8684-9efa1844ca36", 00:07:42.589 "strip_size_kb": 0, 00:07:42.589 "state": "online", 00:07:42.589 "raid_level": "raid1", 00:07:42.589 "superblock": false, 00:07:42.589 "num_base_bdevs": 3, 00:07:42.589 "num_base_bdevs_discovered": 3, 00:07:42.589 "num_base_bdevs_operational": 3, 00:07:42.589 "base_bdevs_list": [ 00:07:42.589 { 00:07:42.589 "name": "BaseBdev1", 00:07:42.589 "uuid": "5c97b5fc-b679-4c05-b747-86711a2c3bbd", 00:07:42.589 "is_configured": true, 00:07:42.589 "data_offset": 0, 00:07:42.589 "data_size": 65536 00:07:42.589 }, 00:07:42.589 { 00:07:42.589 "name": "BaseBdev2", 00:07:42.589 "uuid": "4e850843-09a6-4d35-b01a-7651f36cec1c", 00:07:42.589 "is_configured": true, 00:07:42.589 "data_offset": 0, 00:07:42.589 "data_size": 65536 00:07:42.589 }, 00:07:42.589 { 00:07:42.589 "name": "BaseBdev3", 00:07:42.589 "uuid": "fb8655b1-28ad-4988-8751-3b347dde8d1b", 00:07:42.589 "is_configured": true, 00:07:42.589 "data_offset": 0, 00:07:42.589 "data_size": 65536 00:07:42.589 } 00:07:42.589 ] 00:07:42.589 }' 00:07:42.589 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.589 14:32:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.851 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:42.851 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:42.851 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:42.851 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:42.851 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:42.851 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:42.851 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:42.851 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:42.851 14:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.851 14:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.851 [2024-10-01 14:32:34.367512] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:42.851 14:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.851 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:42.851 "name": "Existed_Raid", 00:07:42.851 "aliases": [ 00:07:42.851 "101413a8-ce50-4f77-8684-9efa1844ca36" 00:07:42.851 ], 00:07:42.851 "product_name": "Raid Volume", 00:07:42.851 "block_size": 512, 00:07:42.851 "num_blocks": 65536, 00:07:42.851 "uuid": "101413a8-ce50-4f77-8684-9efa1844ca36", 00:07:42.851 "assigned_rate_limits": { 00:07:42.851 "rw_ios_per_sec": 0, 00:07:42.851 "rw_mbytes_per_sec": 0, 00:07:42.851 "r_mbytes_per_sec": 0, 00:07:42.851 
"w_mbytes_per_sec": 0 00:07:42.851 }, 00:07:42.851 "claimed": false, 00:07:42.851 "zoned": false, 00:07:42.851 "supported_io_types": { 00:07:42.851 "read": true, 00:07:42.851 "write": true, 00:07:42.851 "unmap": false, 00:07:42.851 "flush": false, 00:07:42.851 "reset": true, 00:07:42.851 "nvme_admin": false, 00:07:42.851 "nvme_io": false, 00:07:42.851 "nvme_io_md": false, 00:07:42.851 "write_zeroes": true, 00:07:42.851 "zcopy": false, 00:07:42.851 "get_zone_info": false, 00:07:42.851 "zone_management": false, 00:07:42.851 "zone_append": false, 00:07:42.851 "compare": false, 00:07:42.851 "compare_and_write": false, 00:07:42.851 "abort": false, 00:07:42.851 "seek_hole": false, 00:07:42.851 "seek_data": false, 00:07:42.851 "copy": false, 00:07:42.851 "nvme_iov_md": false 00:07:42.851 }, 00:07:42.851 "memory_domains": [ 00:07:42.851 { 00:07:42.851 "dma_device_id": "system", 00:07:42.851 "dma_device_type": 1 00:07:42.851 }, 00:07:42.851 { 00:07:42.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.851 "dma_device_type": 2 00:07:42.851 }, 00:07:42.851 { 00:07:42.851 "dma_device_id": "system", 00:07:42.851 "dma_device_type": 1 00:07:42.851 }, 00:07:42.851 { 00:07:42.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.851 "dma_device_type": 2 00:07:42.851 }, 00:07:42.851 { 00:07:42.851 "dma_device_id": "system", 00:07:42.851 "dma_device_type": 1 00:07:42.851 }, 00:07:42.851 { 00:07:42.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.851 "dma_device_type": 2 00:07:42.851 } 00:07:42.851 ], 00:07:42.851 "driver_specific": { 00:07:42.851 "raid": { 00:07:42.851 "uuid": "101413a8-ce50-4f77-8684-9efa1844ca36", 00:07:42.851 "strip_size_kb": 0, 00:07:42.851 "state": "online", 00:07:42.851 "raid_level": "raid1", 00:07:42.851 "superblock": false, 00:07:42.851 "num_base_bdevs": 3, 00:07:42.851 "num_base_bdevs_discovered": 3, 00:07:42.851 "num_base_bdevs_operational": 3, 00:07:42.851 "base_bdevs_list": [ 00:07:42.851 { 00:07:42.851 "name": "BaseBdev1", 00:07:42.851 "uuid": 
"5c97b5fc-b679-4c05-b747-86711a2c3bbd", 00:07:42.851 "is_configured": true, 00:07:42.851 "data_offset": 0, 00:07:42.851 "data_size": 65536 00:07:42.851 }, 00:07:42.851 { 00:07:42.851 "name": "BaseBdev2", 00:07:42.851 "uuid": "4e850843-09a6-4d35-b01a-7651f36cec1c", 00:07:42.851 "is_configured": true, 00:07:42.851 "data_offset": 0, 00:07:42.851 "data_size": 65536 00:07:42.851 }, 00:07:42.851 { 00:07:42.851 "name": "BaseBdev3", 00:07:42.851 "uuid": "fb8655b1-28ad-4988-8751-3b347dde8d1b", 00:07:42.851 "is_configured": true, 00:07:42.851 "data_offset": 0, 00:07:42.851 "data_size": 65536 00:07:42.851 } 00:07:42.851 ] 00:07:42.851 } 00:07:42.851 } 00:07:42.851 }' 00:07:42.851 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:42.851 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:42.851 BaseBdev2 00:07:42.851 BaseBdev3' 00:07:42.851 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:42.851 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:42.851 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:42.851 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:42.851 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:42.851 14:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.851 14:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.851 14:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.851 
14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:42.851 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:42.851 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:42.851 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:42.852 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:42.852 14:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.852 14:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.852 14:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.852 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:42.852 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:42.852 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:42.852 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:42.852 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:42.852 14:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.852 14:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.116 14:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.116 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:43.116 
14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:43.116 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:43.116 14:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.116 14:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.116 [2024-10-01 14:32:34.551238] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:43.117 14:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.117 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:43.117 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:43.117 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:43.117 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:43.117 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:43.117 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:43.117 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:43.117 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:43.117 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:43.117 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:43.117 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:43.117 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:07:43.117 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.117 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.117 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.117 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.117 14:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.117 14:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.117 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:43.117 14:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.117 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.117 "name": "Existed_Raid", 00:07:43.117 "uuid": "101413a8-ce50-4f77-8684-9efa1844ca36", 00:07:43.117 "strip_size_kb": 0, 00:07:43.117 "state": "online", 00:07:43.117 "raid_level": "raid1", 00:07:43.117 "superblock": false, 00:07:43.117 "num_base_bdevs": 3, 00:07:43.117 "num_base_bdevs_discovered": 2, 00:07:43.117 "num_base_bdevs_operational": 2, 00:07:43.117 "base_bdevs_list": [ 00:07:43.117 { 00:07:43.117 "name": null, 00:07:43.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.117 "is_configured": false, 00:07:43.117 "data_offset": 0, 00:07:43.117 "data_size": 65536 00:07:43.117 }, 00:07:43.117 { 00:07:43.117 "name": "BaseBdev2", 00:07:43.117 "uuid": "4e850843-09a6-4d35-b01a-7651f36cec1c", 00:07:43.117 "is_configured": true, 00:07:43.117 "data_offset": 0, 00:07:43.117 "data_size": 65536 00:07:43.117 }, 00:07:43.117 { 00:07:43.117 "name": "BaseBdev3", 00:07:43.117 "uuid": "fb8655b1-28ad-4988-8751-3b347dde8d1b", 00:07:43.117 "is_configured": true, 00:07:43.117 
"data_offset": 0, 00:07:43.117 "data_size": 65536 00:07:43.117 } 00:07:43.117 ] 00:07:43.117 }' 00:07:43.117 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.117 14:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.377 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:43.377 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:43.377 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.377 14:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.377 14:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.377 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:43.377 14:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.377 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:43.377 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:43.377 14:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:43.377 14:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.377 14:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.377 [2024-10-01 14:32:34.986730] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:43.377 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.377 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:43.377 14:32:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:43.377 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.377 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:43.377 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.377 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.637 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.637 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:43.637 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:43.637 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:07:43.637 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.637 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.637 [2024-10-01 14:32:35.088781] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:43.637 [2024-10-01 14:32:35.088960] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:43.637 [2024-10-01 14:32:35.148489] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:43.637 [2024-10-01 14:32:35.148658] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:43.637 [2024-10-01 14:32:35.148677] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:43.637 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.637 14:32:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:43.637 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:43.637 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.637 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.637 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.637 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:43.637 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.637 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:43.637 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:43.637 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:07:43.637 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:07:43.637 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:43.637 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:43.637 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.637 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.637 BaseBdev2 00:07:43.637 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.637 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:07:43.637 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:43.637 14:32:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.638 [ 00:07:43.638 { 00:07:43.638 "name": "BaseBdev2", 00:07:43.638 "aliases": [ 00:07:43.638 "c287bbcb-2c86-400d-8657-2c88be3f9979" 00:07:43.638 ], 00:07:43.638 "product_name": "Malloc disk", 00:07:43.638 "block_size": 512, 00:07:43.638 "num_blocks": 65536, 00:07:43.638 "uuid": "c287bbcb-2c86-400d-8657-2c88be3f9979", 00:07:43.638 "assigned_rate_limits": { 00:07:43.638 "rw_ios_per_sec": 0, 00:07:43.638 "rw_mbytes_per_sec": 0, 00:07:43.638 "r_mbytes_per_sec": 0, 00:07:43.638 "w_mbytes_per_sec": 0 00:07:43.638 }, 00:07:43.638 "claimed": false, 00:07:43.638 "zoned": false, 00:07:43.638 "supported_io_types": { 00:07:43.638 "read": true, 00:07:43.638 "write": true, 00:07:43.638 "unmap": true, 00:07:43.638 "flush": true, 00:07:43.638 "reset": true, 00:07:43.638 "nvme_admin": 
false, 00:07:43.638 "nvme_io": false, 00:07:43.638 "nvme_io_md": false, 00:07:43.638 "write_zeroes": true, 00:07:43.638 "zcopy": true, 00:07:43.638 "get_zone_info": false, 00:07:43.638 "zone_management": false, 00:07:43.638 "zone_append": false, 00:07:43.638 "compare": false, 00:07:43.638 "compare_and_write": false, 00:07:43.638 "abort": true, 00:07:43.638 "seek_hole": false, 00:07:43.638 "seek_data": false, 00:07:43.638 "copy": true, 00:07:43.638 "nvme_iov_md": false 00:07:43.638 }, 00:07:43.638 "memory_domains": [ 00:07:43.638 { 00:07:43.638 "dma_device_id": "system", 00:07:43.638 "dma_device_type": 1 00:07:43.638 }, 00:07:43.638 { 00:07:43.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.638 "dma_device_type": 2 00:07:43.638 } 00:07:43.638 ], 00:07:43.638 "driver_specific": {} 00:07:43.638 } 00:07:43.638 ] 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.638 BaseBdev3 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:07:43.638 14:32:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.638 [ 00:07:43.638 { 00:07:43.638 "name": "BaseBdev3", 00:07:43.638 "aliases": [ 00:07:43.638 "e109b2df-c58f-4c87-90cf-4958468d19de" 00:07:43.638 ], 00:07:43.638 "product_name": "Malloc disk", 00:07:43.638 "block_size": 512, 00:07:43.638 "num_blocks": 65536, 00:07:43.638 "uuid": "e109b2df-c58f-4c87-90cf-4958468d19de", 00:07:43.638 "assigned_rate_limits": { 00:07:43.638 "rw_ios_per_sec": 0, 00:07:43.638 "rw_mbytes_per_sec": 0, 00:07:43.638 "r_mbytes_per_sec": 0, 00:07:43.638 "w_mbytes_per_sec": 0 00:07:43.638 }, 00:07:43.638 "claimed": false, 00:07:43.638 "zoned": false, 00:07:43.638 "supported_io_types": { 00:07:43.638 "read": true, 00:07:43.638 "write": true, 00:07:43.638 "unmap": true, 00:07:43.638 "flush": true, 00:07:43.638 "reset": true, 00:07:43.638 "nvme_admin": 
false, 00:07:43.638 "nvme_io": false, 00:07:43.638 "nvme_io_md": false, 00:07:43.638 "write_zeroes": true, 00:07:43.638 "zcopy": true, 00:07:43.638 "get_zone_info": false, 00:07:43.638 "zone_management": false, 00:07:43.638 "zone_append": false, 00:07:43.638 "compare": false, 00:07:43.638 "compare_and_write": false, 00:07:43.638 "abort": true, 00:07:43.638 "seek_hole": false, 00:07:43.638 "seek_data": false, 00:07:43.638 "copy": true, 00:07:43.638 "nvme_iov_md": false 00:07:43.638 }, 00:07:43.638 "memory_domains": [ 00:07:43.638 { 00:07:43.638 "dma_device_id": "system", 00:07:43.638 "dma_device_type": 1 00:07:43.638 }, 00:07:43.638 { 00:07:43.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.638 "dma_device_type": 2 00:07:43.638 } 00:07:43.638 ], 00:07:43.638 "driver_specific": {} 00:07:43.638 } 00:07:43.638 ] 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.638 [2024-10-01 14:32:35.304045] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:43.638 [2024-10-01 14:32:35.304182] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:43.638 [2024-10-01 14:32:35.304250] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev2 is claimed 00:07:43.638 [2024-10-01 14:32:35.306149] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.638 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.897 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.897 
14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.897 "name": "Existed_Raid", 00:07:43.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.897 "strip_size_kb": 0, 00:07:43.898 "state": "configuring", 00:07:43.898 "raid_level": "raid1", 00:07:43.898 "superblock": false, 00:07:43.898 "num_base_bdevs": 3, 00:07:43.898 "num_base_bdevs_discovered": 2, 00:07:43.898 "num_base_bdevs_operational": 3, 00:07:43.898 "base_bdevs_list": [ 00:07:43.898 { 00:07:43.898 "name": "BaseBdev1", 00:07:43.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.898 "is_configured": false, 00:07:43.898 "data_offset": 0, 00:07:43.898 "data_size": 0 00:07:43.898 }, 00:07:43.898 { 00:07:43.898 "name": "BaseBdev2", 00:07:43.898 "uuid": "c287bbcb-2c86-400d-8657-2c88be3f9979", 00:07:43.898 "is_configured": true, 00:07:43.898 "data_offset": 0, 00:07:43.898 "data_size": 65536 00:07:43.898 }, 00:07:43.898 { 00:07:43.898 "name": "BaseBdev3", 00:07:43.898 "uuid": "e109b2df-c58f-4c87-90cf-4958468d19de", 00:07:43.898 "is_configured": true, 00:07:43.898 "data_offset": 0, 00:07:43.898 "data_size": 65536 00:07:43.898 } 00:07:43.898 ] 00:07:43.898 }' 00:07:43.898 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.898 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.159 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:07:44.159 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.159 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.159 [2024-10-01 14:32:35.640110] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:44.159 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.159 14:32:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:44.159 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:44.159 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:44.159 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:44.159 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:44.159 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:44.159 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.159 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.159 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.159 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.159 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.159 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.159 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.159 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.159 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.159 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.159 "name": "Existed_Raid", 00:07:44.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.159 "strip_size_kb": 0, 00:07:44.159 "state": "configuring", 00:07:44.159 
"raid_level": "raid1", 00:07:44.159 "superblock": false, 00:07:44.159 "num_base_bdevs": 3, 00:07:44.159 "num_base_bdevs_discovered": 1, 00:07:44.159 "num_base_bdevs_operational": 3, 00:07:44.159 "base_bdevs_list": [ 00:07:44.159 { 00:07:44.159 "name": "BaseBdev1", 00:07:44.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.159 "is_configured": false, 00:07:44.160 "data_offset": 0, 00:07:44.160 "data_size": 0 00:07:44.160 }, 00:07:44.160 { 00:07:44.160 "name": null, 00:07:44.160 "uuid": "c287bbcb-2c86-400d-8657-2c88be3f9979", 00:07:44.160 "is_configured": false, 00:07:44.160 "data_offset": 0, 00:07:44.160 "data_size": 65536 00:07:44.160 }, 00:07:44.160 { 00:07:44.160 "name": "BaseBdev3", 00:07:44.160 "uuid": "e109b2df-c58f-4c87-90cf-4958468d19de", 00:07:44.160 "is_configured": true, 00:07:44.160 "data_offset": 0, 00:07:44.160 "data_size": 65536 00:07:44.160 } 00:07:44.160 ] 00:07:44.160 }' 00:07:44.160 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.160 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.421 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:44.421 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.421 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.421 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.421 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.421 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:07:44.421 14:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:44.421 14:32:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.421 14:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.421 [2024-10-01 14:32:36.022394] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:44.421 BaseBdev1 00:07:44.421 14:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.421 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:07:44.421 14:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:44.421 14:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:44.421 14:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:44.421 14:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:44.421 14:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:44.421 14:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:44.421 14:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.421 14:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.421 14:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.421 14:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:44.421 14:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.421 14:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.421 [ 00:07:44.421 { 00:07:44.421 "name": "BaseBdev1", 00:07:44.421 "aliases": [ 00:07:44.421 
"af26abb6-f394-4d02-b740-b40f15bc8404" 00:07:44.421 ], 00:07:44.421 "product_name": "Malloc disk", 00:07:44.421 "block_size": 512, 00:07:44.421 "num_blocks": 65536, 00:07:44.421 "uuid": "af26abb6-f394-4d02-b740-b40f15bc8404", 00:07:44.421 "assigned_rate_limits": { 00:07:44.421 "rw_ios_per_sec": 0, 00:07:44.421 "rw_mbytes_per_sec": 0, 00:07:44.421 "r_mbytes_per_sec": 0, 00:07:44.421 "w_mbytes_per_sec": 0 00:07:44.421 }, 00:07:44.421 "claimed": true, 00:07:44.421 "claim_type": "exclusive_write", 00:07:44.421 "zoned": false, 00:07:44.421 "supported_io_types": { 00:07:44.421 "read": true, 00:07:44.421 "write": true, 00:07:44.421 "unmap": true, 00:07:44.421 "flush": true, 00:07:44.421 "reset": true, 00:07:44.421 "nvme_admin": false, 00:07:44.421 "nvme_io": false, 00:07:44.421 "nvme_io_md": false, 00:07:44.421 "write_zeroes": true, 00:07:44.421 "zcopy": true, 00:07:44.421 "get_zone_info": false, 00:07:44.421 "zone_management": false, 00:07:44.421 "zone_append": false, 00:07:44.421 "compare": false, 00:07:44.421 "compare_and_write": false, 00:07:44.421 "abort": true, 00:07:44.421 "seek_hole": false, 00:07:44.421 "seek_data": false, 00:07:44.421 "copy": true, 00:07:44.421 "nvme_iov_md": false 00:07:44.421 }, 00:07:44.421 "memory_domains": [ 00:07:44.421 { 00:07:44.421 "dma_device_id": "system", 00:07:44.421 "dma_device_type": 1 00:07:44.421 }, 00:07:44.421 { 00:07:44.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.421 "dma_device_type": 2 00:07:44.421 } 00:07:44.421 ], 00:07:44.421 "driver_specific": {} 00:07:44.421 } 00:07:44.421 ] 00:07:44.421 14:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.421 14:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:44.421 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:44.421 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- 
# local raid_bdev_name=Existed_Raid 00:07:44.421 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:44.421 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:44.421 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:44.421 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:44.421 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.421 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.421 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.421 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.421 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.421 14:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.421 14:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.421 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.421 14:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.421 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.421 "name": "Existed_Raid", 00:07:44.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.421 "strip_size_kb": 0, 00:07:44.421 "state": "configuring", 00:07:44.421 "raid_level": "raid1", 00:07:44.421 "superblock": false, 00:07:44.421 "num_base_bdevs": 3, 00:07:44.421 "num_base_bdevs_discovered": 2, 00:07:44.421 "num_base_bdevs_operational": 3, 00:07:44.421 "base_bdevs_list": [ 
00:07:44.421 { 00:07:44.421 "name": "BaseBdev1", 00:07:44.421 "uuid": "af26abb6-f394-4d02-b740-b40f15bc8404", 00:07:44.421 "is_configured": true, 00:07:44.421 "data_offset": 0, 00:07:44.421 "data_size": 65536 00:07:44.421 }, 00:07:44.421 { 00:07:44.421 "name": null, 00:07:44.421 "uuid": "c287bbcb-2c86-400d-8657-2c88be3f9979", 00:07:44.421 "is_configured": false, 00:07:44.421 "data_offset": 0, 00:07:44.421 "data_size": 65536 00:07:44.421 }, 00:07:44.421 { 00:07:44.421 "name": "BaseBdev3", 00:07:44.421 "uuid": "e109b2df-c58f-4c87-90cf-4958468d19de", 00:07:44.421 "is_configured": true, 00:07:44.421 "data_offset": 0, 00:07:44.421 "data_size": 65536 00:07:44.422 } 00:07:44.422 ] 00:07:44.422 }' 00:07:44.422 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.422 14:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.994 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:44.994 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.994 14:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.994 14:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.994 14:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.994 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:07:44.994 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:07:44.994 14:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.994 14:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.994 [2024-10-01 14:32:36.430552] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:44.994 14:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.994 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:44.994 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:44.994 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:44.994 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:44.994 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:44.994 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:44.994 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.994 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.994 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.994 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.994 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.994 14:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.994 14:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.994 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.994 14:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.994 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:07:44.994 "name": "Existed_Raid", 00:07:44.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.994 "strip_size_kb": 0, 00:07:44.994 "state": "configuring", 00:07:44.994 "raid_level": "raid1", 00:07:44.994 "superblock": false, 00:07:44.994 "num_base_bdevs": 3, 00:07:44.994 "num_base_bdevs_discovered": 1, 00:07:44.994 "num_base_bdevs_operational": 3, 00:07:44.994 "base_bdevs_list": [ 00:07:44.994 { 00:07:44.994 "name": "BaseBdev1", 00:07:44.994 "uuid": "af26abb6-f394-4d02-b740-b40f15bc8404", 00:07:44.994 "is_configured": true, 00:07:44.994 "data_offset": 0, 00:07:44.994 "data_size": 65536 00:07:44.994 }, 00:07:44.994 { 00:07:44.994 "name": null, 00:07:44.994 "uuid": "c287bbcb-2c86-400d-8657-2c88be3f9979", 00:07:44.994 "is_configured": false, 00:07:44.994 "data_offset": 0, 00:07:44.994 "data_size": 65536 00:07:44.994 }, 00:07:44.994 { 00:07:44.994 "name": null, 00:07:44.994 "uuid": "e109b2df-c58f-4c87-90cf-4958468d19de", 00:07:44.994 "is_configured": false, 00:07:44.994 "data_offset": 0, 00:07:44.994 "data_size": 65536 00:07:44.994 } 00:07:44.994 ] 00:07:44.994 }' 00:07:44.994 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.994 14:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.255 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.255 14:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.255 14:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.255 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:45.255 14:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.255 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 
00:07:45.255 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:07:45.255 14:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.255 14:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.255 [2024-10-01 14:32:36.778642] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:45.255 14:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.255 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:45.255 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:45.255 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:45.255 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:45.255 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:45.255 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:45.255 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.255 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.255 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.255 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.255 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:45.255 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:07:45.255 14:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.255 14:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.255 14:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.255 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.255 "name": "Existed_Raid", 00:07:45.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.255 "strip_size_kb": 0, 00:07:45.255 "state": "configuring", 00:07:45.255 "raid_level": "raid1", 00:07:45.255 "superblock": false, 00:07:45.255 "num_base_bdevs": 3, 00:07:45.255 "num_base_bdevs_discovered": 2, 00:07:45.255 "num_base_bdevs_operational": 3, 00:07:45.255 "base_bdevs_list": [ 00:07:45.255 { 00:07:45.255 "name": "BaseBdev1", 00:07:45.255 "uuid": "af26abb6-f394-4d02-b740-b40f15bc8404", 00:07:45.255 "is_configured": true, 00:07:45.255 "data_offset": 0, 00:07:45.255 "data_size": 65536 00:07:45.255 }, 00:07:45.255 { 00:07:45.255 "name": null, 00:07:45.255 "uuid": "c287bbcb-2c86-400d-8657-2c88be3f9979", 00:07:45.255 "is_configured": false, 00:07:45.255 "data_offset": 0, 00:07:45.255 "data_size": 65536 00:07:45.255 }, 00:07:45.255 { 00:07:45.255 "name": "BaseBdev3", 00:07:45.255 "uuid": "e109b2df-c58f-4c87-90cf-4958468d19de", 00:07:45.255 "is_configured": true, 00:07:45.255 "data_offset": 0, 00:07:45.255 "data_size": 65536 00:07:45.255 } 00:07:45.255 ] 00:07:45.255 }' 00:07:45.255 14:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.255 14:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.517 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.517 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:45.517 14:32:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.517 14:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.517 14:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.517 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:07:45.517 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:45.517 14:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.517 14:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.517 [2024-10-01 14:32:37.146781] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:45.779 14:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.779 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:45.779 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:45.779 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:45.779 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:45.779 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:45.779 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:45.779 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.779 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.779 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:45.779 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.779 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.779 14:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.779 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:45.779 14:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.779 14:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.779 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.779 "name": "Existed_Raid", 00:07:45.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.779 "strip_size_kb": 0, 00:07:45.779 "state": "configuring", 00:07:45.779 "raid_level": "raid1", 00:07:45.779 "superblock": false, 00:07:45.779 "num_base_bdevs": 3, 00:07:45.779 "num_base_bdevs_discovered": 1, 00:07:45.779 "num_base_bdevs_operational": 3, 00:07:45.779 "base_bdevs_list": [ 00:07:45.779 { 00:07:45.779 "name": null, 00:07:45.779 "uuid": "af26abb6-f394-4d02-b740-b40f15bc8404", 00:07:45.779 "is_configured": false, 00:07:45.779 "data_offset": 0, 00:07:45.779 "data_size": 65536 00:07:45.779 }, 00:07:45.779 { 00:07:45.779 "name": null, 00:07:45.779 "uuid": "c287bbcb-2c86-400d-8657-2c88be3f9979", 00:07:45.779 "is_configured": false, 00:07:45.779 "data_offset": 0, 00:07:45.779 "data_size": 65536 00:07:45.779 }, 00:07:45.779 { 00:07:45.779 "name": "BaseBdev3", 00:07:45.779 "uuid": "e109b2df-c58f-4c87-90cf-4958468d19de", 00:07:45.779 "is_configured": true, 00:07:45.779 "data_offset": 0, 00:07:45.779 "data_size": 65536 00:07:45.779 } 00:07:45.779 ] 00:07:45.779 }' 00:07:45.779 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:45.779 14:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.040 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.040 14:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.040 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:46.040 14:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.040 14:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.040 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:07:46.040 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:07:46.040 14:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.040 14:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.040 [2024-10-01 14:32:37.561067] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:46.040 14:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.040 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:46.040 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:46.040 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:46.040 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:46.040 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:46.040 14:32:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:46.040 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.040 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.040 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.040 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.040 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.040 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.040 14:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.040 14:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.040 14:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.040 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.040 "name": "Existed_Raid", 00:07:46.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.041 "strip_size_kb": 0, 00:07:46.041 "state": "configuring", 00:07:46.041 "raid_level": "raid1", 00:07:46.041 "superblock": false, 00:07:46.041 "num_base_bdevs": 3, 00:07:46.041 "num_base_bdevs_discovered": 2, 00:07:46.041 "num_base_bdevs_operational": 3, 00:07:46.041 "base_bdevs_list": [ 00:07:46.041 { 00:07:46.041 "name": null, 00:07:46.041 "uuid": "af26abb6-f394-4d02-b740-b40f15bc8404", 00:07:46.041 "is_configured": false, 00:07:46.041 "data_offset": 0, 00:07:46.041 "data_size": 65536 00:07:46.041 }, 00:07:46.041 { 00:07:46.041 "name": "BaseBdev2", 00:07:46.041 "uuid": "c287bbcb-2c86-400d-8657-2c88be3f9979", 00:07:46.041 "is_configured": true, 00:07:46.041 "data_offset": 
0, 00:07:46.041 "data_size": 65536 00:07:46.041 }, 00:07:46.041 { 00:07:46.041 "name": "BaseBdev3", 00:07:46.041 "uuid": "e109b2df-c58f-4c87-90cf-4958468d19de", 00:07:46.041 "is_configured": true, 00:07:46.041 "data_offset": 0, 00:07:46.041 "data_size": 65536 00:07:46.041 } 00:07:46.041 ] 00:07:46.041 }' 00:07:46.041 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.041 14:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.303 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:46.303 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.303 14:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.303 14:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.303 14:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.303 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:07:46.303 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.303 14:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.303 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:07:46.303 14:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.303 14:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.303 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u af26abb6-f394-4d02-b740-b40f15bc8404 00:07:46.303 14:32:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.303 14:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.303 [2024-10-01 14:32:37.967350] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:07:46.303 [2024-10-01 14:32:37.967398] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:07:46.303 [2024-10-01 14:32:37.967405] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:46.303 [2024-10-01 14:32:37.967648] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:46.303 [2024-10-01 14:32:37.967807] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:07:46.303 [2024-10-01 14:32:37.967819] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:07:46.303 [2024-10-01 14:32:37.968051] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:46.303 NewBaseBdev 00:07:46.303 14:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.303 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:07:46.303 14:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:07:46.303 14:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:46.303 14:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:46.303 14:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:46.303 14:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:46.303 14:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:46.303 
14:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.303 14:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.303 14:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.303 14:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:07:46.303 14:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.303 14:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.564 [ 00:07:46.564 { 00:07:46.564 "name": "NewBaseBdev", 00:07:46.564 "aliases": [ 00:07:46.564 "af26abb6-f394-4d02-b740-b40f15bc8404" 00:07:46.564 ], 00:07:46.564 "product_name": "Malloc disk", 00:07:46.564 "block_size": 512, 00:07:46.564 "num_blocks": 65536, 00:07:46.564 "uuid": "af26abb6-f394-4d02-b740-b40f15bc8404", 00:07:46.564 "assigned_rate_limits": { 00:07:46.564 "rw_ios_per_sec": 0, 00:07:46.564 "rw_mbytes_per_sec": 0, 00:07:46.564 "r_mbytes_per_sec": 0, 00:07:46.564 "w_mbytes_per_sec": 0 00:07:46.564 }, 00:07:46.564 "claimed": true, 00:07:46.564 "claim_type": "exclusive_write", 00:07:46.564 "zoned": false, 00:07:46.564 "supported_io_types": { 00:07:46.564 "read": true, 00:07:46.564 "write": true, 00:07:46.564 "unmap": true, 00:07:46.564 "flush": true, 00:07:46.564 "reset": true, 00:07:46.564 "nvme_admin": false, 00:07:46.564 "nvme_io": false, 00:07:46.564 "nvme_io_md": false, 00:07:46.564 "write_zeroes": true, 00:07:46.564 "zcopy": true, 00:07:46.564 "get_zone_info": false, 00:07:46.564 "zone_management": false, 00:07:46.564 "zone_append": false, 00:07:46.564 "compare": false, 00:07:46.564 "compare_and_write": false, 00:07:46.564 "abort": true, 00:07:46.564 "seek_hole": false, 00:07:46.564 "seek_data": false, 00:07:46.564 "copy": true, 00:07:46.564 "nvme_iov_md": false 00:07:46.564 }, 00:07:46.564 
"memory_domains": [ 00:07:46.564 { 00:07:46.564 "dma_device_id": "system", 00:07:46.564 "dma_device_type": 1 00:07:46.564 }, 00:07:46.564 { 00:07:46.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.564 "dma_device_type": 2 00:07:46.564 } 00:07:46.564 ], 00:07:46.564 "driver_specific": {} 00:07:46.564 } 00:07:46.564 ] 00:07:46.564 14:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.564 14:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:46.564 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:07:46.564 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:46.564 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:46.564 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:46.564 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:46.564 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:46.565 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.565 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.565 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.565 14:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.565 14:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.565 14:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.565 14:32:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.565 14:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.565 14:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.565 14:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.565 "name": "Existed_Raid", 00:07:46.565 "uuid": "464f6358-eba4-4369-80e1-9f75c027977b", 00:07:46.565 "strip_size_kb": 0, 00:07:46.565 "state": "online", 00:07:46.565 "raid_level": "raid1", 00:07:46.565 "superblock": false, 00:07:46.565 "num_base_bdevs": 3, 00:07:46.565 "num_base_bdevs_discovered": 3, 00:07:46.565 "num_base_bdevs_operational": 3, 00:07:46.565 "base_bdevs_list": [ 00:07:46.565 { 00:07:46.565 "name": "NewBaseBdev", 00:07:46.565 "uuid": "af26abb6-f394-4d02-b740-b40f15bc8404", 00:07:46.565 "is_configured": true, 00:07:46.565 "data_offset": 0, 00:07:46.565 "data_size": 65536 00:07:46.565 }, 00:07:46.565 { 00:07:46.565 "name": "BaseBdev2", 00:07:46.565 "uuid": "c287bbcb-2c86-400d-8657-2c88be3f9979", 00:07:46.565 "is_configured": true, 00:07:46.565 "data_offset": 0, 00:07:46.565 "data_size": 65536 00:07:46.565 }, 00:07:46.565 { 00:07:46.565 "name": "BaseBdev3", 00:07:46.565 "uuid": "e109b2df-c58f-4c87-90cf-4958468d19de", 00:07:46.565 "is_configured": true, 00:07:46.565 "data_offset": 0, 00:07:46.565 "data_size": 65536 00:07:46.565 } 00:07:46.565 ] 00:07:46.565 }' 00:07:46.565 14:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.565 14:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.827 14:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:07:46.827 14:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:46.827 14:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:07:46.827 14:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:46.827 14:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:46.827 14:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:46.827 14:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:46.827 14:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:46.827 14:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.827 14:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.827 [2024-10-01 14:32:38.331834] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:46.827 14:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.827 14:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:46.827 "name": "Existed_Raid", 00:07:46.827 "aliases": [ 00:07:46.827 "464f6358-eba4-4369-80e1-9f75c027977b" 00:07:46.827 ], 00:07:46.827 "product_name": "Raid Volume", 00:07:46.827 "block_size": 512, 00:07:46.827 "num_blocks": 65536, 00:07:46.827 "uuid": "464f6358-eba4-4369-80e1-9f75c027977b", 00:07:46.827 "assigned_rate_limits": { 00:07:46.827 "rw_ios_per_sec": 0, 00:07:46.827 "rw_mbytes_per_sec": 0, 00:07:46.827 "r_mbytes_per_sec": 0, 00:07:46.827 "w_mbytes_per_sec": 0 00:07:46.827 }, 00:07:46.827 "claimed": false, 00:07:46.827 "zoned": false, 00:07:46.827 "supported_io_types": { 00:07:46.827 "read": true, 00:07:46.827 "write": true, 00:07:46.827 "unmap": false, 00:07:46.827 "flush": false, 00:07:46.827 "reset": true, 00:07:46.827 "nvme_admin": false, 00:07:46.827 "nvme_io": false, 00:07:46.827 "nvme_io_md": false, 00:07:46.827 "write_zeroes": true, 
00:07:46.827 "zcopy": false, 00:07:46.827 "get_zone_info": false, 00:07:46.827 "zone_management": false, 00:07:46.827 "zone_append": false, 00:07:46.827 "compare": false, 00:07:46.827 "compare_and_write": false, 00:07:46.827 "abort": false, 00:07:46.827 "seek_hole": false, 00:07:46.827 "seek_data": false, 00:07:46.827 "copy": false, 00:07:46.827 "nvme_iov_md": false 00:07:46.827 }, 00:07:46.827 "memory_domains": [ 00:07:46.827 { 00:07:46.827 "dma_device_id": "system", 00:07:46.827 "dma_device_type": 1 00:07:46.827 }, 00:07:46.827 { 00:07:46.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.827 "dma_device_type": 2 00:07:46.827 }, 00:07:46.827 { 00:07:46.827 "dma_device_id": "system", 00:07:46.827 "dma_device_type": 1 00:07:46.827 }, 00:07:46.827 { 00:07:46.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.827 "dma_device_type": 2 00:07:46.827 }, 00:07:46.827 { 00:07:46.827 "dma_device_id": "system", 00:07:46.827 "dma_device_type": 1 00:07:46.827 }, 00:07:46.827 { 00:07:46.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.827 "dma_device_type": 2 00:07:46.827 } 00:07:46.827 ], 00:07:46.827 "driver_specific": { 00:07:46.827 "raid": { 00:07:46.827 "uuid": "464f6358-eba4-4369-80e1-9f75c027977b", 00:07:46.827 "strip_size_kb": 0, 00:07:46.827 "state": "online", 00:07:46.827 "raid_level": "raid1", 00:07:46.827 "superblock": false, 00:07:46.827 "num_base_bdevs": 3, 00:07:46.827 "num_base_bdevs_discovered": 3, 00:07:46.827 "num_base_bdevs_operational": 3, 00:07:46.827 "base_bdevs_list": [ 00:07:46.827 { 00:07:46.827 "name": "NewBaseBdev", 00:07:46.827 "uuid": "af26abb6-f394-4d02-b740-b40f15bc8404", 00:07:46.827 "is_configured": true, 00:07:46.827 "data_offset": 0, 00:07:46.827 "data_size": 65536 00:07:46.827 }, 00:07:46.827 { 00:07:46.827 "name": "BaseBdev2", 00:07:46.827 "uuid": "c287bbcb-2c86-400d-8657-2c88be3f9979", 00:07:46.827 "is_configured": true, 00:07:46.827 "data_offset": 0, 00:07:46.827 "data_size": 65536 00:07:46.827 }, 00:07:46.827 { 00:07:46.827 
"name": "BaseBdev3", 00:07:46.827 "uuid": "e109b2df-c58f-4c87-90cf-4958468d19de", 00:07:46.827 "is_configured": true, 00:07:46.827 "data_offset": 0, 00:07:46.827 "data_size": 65536 00:07:46.827 } 00:07:46.827 ] 00:07:46.827 } 00:07:46.827 } 00:07:46.827 }' 00:07:46.827 14:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:46.827 14:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:07:46.827 BaseBdev2 00:07:46.827 BaseBdev3' 00:07:46.827 14:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.827 14:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:46.827 14:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:46.827 14:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:07:46.827 14:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.827 14:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.827 14:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.827 14:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.827 14:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:46.827 14:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:46.827 14:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:46.827 14:32:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:46.827 14:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.827 14:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.827 14:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.827 14:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.827 14:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:46.827 14:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:46.827 14:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:46.827 14:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:46.827 14:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.827 14:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.827 14:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.827 14:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.090 14:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:47.090 14:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:47.090 14:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:47.090 14:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.090 14:32:38 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:07:47.090 [2024-10-01 14:32:38.523515] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:47.090 [2024-10-01 14:32:38.523622] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:47.090 [2024-10-01 14:32:38.523694] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:47.090 [2024-10-01 14:32:38.523987] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:47.090 [2024-10-01 14:32:38.523998] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:07:47.090 14:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.090 14:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65982 00:07:47.090 14:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 65982 ']' 00:07:47.090 14:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 65982 00:07:47.090 14:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:47.090 14:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:47.090 14:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65982 00:07:47.090 14:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:47.090 14:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:47.090 killing process with pid 65982 00:07:47.090 14:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65982' 00:07:47.090 14:32:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@969 -- # kill 65982 00:07:47.090 [2024-10-01 14:32:38.559681] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:47.090 14:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 65982 00:07:47.090 [2024-10-01 14:32:38.745795] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:48.095 ************************************ 00:07:48.095 END TEST raid_state_function_test 00:07:48.095 ************************************ 00:07:48.095 14:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:48.095 00:07:48.095 real 0m8.074s 00:07:48.095 user 0m12.894s 00:07:48.095 sys 0m1.199s 00:07:48.095 14:32:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:48.095 14:32:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.095 14:32:39 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:07:48.095 14:32:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:48.095 14:32:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:48.095 14:32:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:48.095 ************************************ 00:07:48.095 START TEST raid_state_function_test_sb 00:07:48.095 ************************************ 00:07:48.095 14:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true 00:07:48.095 14:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:48.095 14:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:07:48.095 14:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:48.095 14:32:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:48.095 14:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:48.095 14:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:48.095 14:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:48.095 14:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:48.095 14:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:48.095 14:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:48.095 14:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:48.095 14:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:48.095 14:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:07:48.095 14:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:48.095 14:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:48.095 14:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:48.095 14:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:48.095 14:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:48.096 14:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:48.096 14:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:48.096 14:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:48.096 14:32:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:48.096 14:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:48.096 14:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:48.096 14:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:48.096 Process raid pid: 66576 00:07:48.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.096 14:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66576 00:07:48.096 14:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66576' 00:07:48.096 14:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66576 00:07:48.096 14:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 66576 ']' 00:07:48.096 14:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.096 14:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:48.096 14:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.096 14:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:48.096 14:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.096 14:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:48.096 [2024-10-01 14:32:39.674496] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:07:48.096 [2024-10-01 14:32:39.674607] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.358 [2024-10-01 14:32:39.826093] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.358 [2024-10-01 14:32:40.016981] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.619 [2024-10-01 14:32:40.155190] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:48.619 [2024-10-01 14:32:40.155224] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:48.881 14:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:48.881 14:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:48.881 14:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:48.881 14:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.881 14:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.881 [2024-10-01 14:32:40.550932] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:48.881 [2024-10-01 14:32:40.550982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:48.881 [2024-10-01 14:32:40.550992] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:48.881 [2024-10-01 14:32:40.551001] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:48.881 [2024-10-01 14:32:40.551008] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:07:48.881 [2024-10-01 14:32:40.551016] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:48.881 14:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.881 14:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:48.881 14:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:48.881 14:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:48.881 14:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:48.881 14:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:48.881 14:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:48.881 14:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.881 14:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.881 14:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.881 14:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.881 14:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.881 14:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.882 14:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.882 14:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:49.143 14:32:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.143 14:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.143 "name": "Existed_Raid", 00:07:49.143 "uuid": "0968e6b9-a4db-4576-8f04-cbbf2cef4ca0", 00:07:49.143 "strip_size_kb": 0, 00:07:49.143 "state": "configuring", 00:07:49.143 "raid_level": "raid1", 00:07:49.143 "superblock": true, 00:07:49.143 "num_base_bdevs": 3, 00:07:49.143 "num_base_bdevs_discovered": 0, 00:07:49.143 "num_base_bdevs_operational": 3, 00:07:49.143 "base_bdevs_list": [ 00:07:49.143 { 00:07:49.143 "name": "BaseBdev1", 00:07:49.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.143 "is_configured": false, 00:07:49.143 "data_offset": 0, 00:07:49.143 "data_size": 0 00:07:49.143 }, 00:07:49.143 { 00:07:49.143 "name": "BaseBdev2", 00:07:49.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.143 "is_configured": false, 00:07:49.143 "data_offset": 0, 00:07:49.143 "data_size": 0 00:07:49.143 }, 00:07:49.143 { 00:07:49.143 "name": "BaseBdev3", 00:07:49.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.143 "is_configured": false, 00:07:49.143 "data_offset": 0, 00:07:49.143 "data_size": 0 00:07:49.143 } 00:07:49.143 ] 00:07:49.143 }' 00:07:49.143 14:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.143 14:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.406 14:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:49.406 14:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.406 14:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.406 [2024-10-01 14:32:40.858942] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:49.407 [2024-10-01 14:32:40.858977] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:49.407 14:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.407 14:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:49.407 14:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.407 14:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.407 [2024-10-01 14:32:40.867013] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:49.407 [2024-10-01 14:32:40.867076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:49.407 [2024-10-01 14:32:40.867091] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:49.407 [2024-10-01 14:32:40.867107] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:49.407 [2024-10-01 14:32:40.867118] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:49.407 [2024-10-01 14:32:40.867133] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:49.407 14:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.407 14:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:49.407 14:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.407 14:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.407 [2024-10-01 14:32:40.911177] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:49.407 BaseBdev1 
00:07:49.407 14:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.407 14:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:49.407 14:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:49.407 14:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:49.407 14:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:49.407 14:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:49.407 14:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:49.407 14:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:49.407 14:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.407 14:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.407 14:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.407 14:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:49.407 14:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.407 14:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.407 [ 00:07:49.407 { 00:07:49.407 "name": "BaseBdev1", 00:07:49.407 "aliases": [ 00:07:49.407 "322c1b4e-ee27-4135-8db2-34db0c1dd238" 00:07:49.407 ], 00:07:49.407 "product_name": "Malloc disk", 00:07:49.407 "block_size": 512, 00:07:49.407 "num_blocks": 65536, 00:07:49.407 "uuid": "322c1b4e-ee27-4135-8db2-34db0c1dd238", 00:07:49.407 "assigned_rate_limits": { 00:07:49.407 
"rw_ios_per_sec": 0, 00:07:49.407 "rw_mbytes_per_sec": 0, 00:07:49.407 "r_mbytes_per_sec": 0, 00:07:49.407 "w_mbytes_per_sec": 0 00:07:49.407 }, 00:07:49.407 "claimed": true, 00:07:49.407 "claim_type": "exclusive_write", 00:07:49.407 "zoned": false, 00:07:49.407 "supported_io_types": { 00:07:49.407 "read": true, 00:07:49.407 "write": true, 00:07:49.407 "unmap": true, 00:07:49.407 "flush": true, 00:07:49.407 "reset": true, 00:07:49.407 "nvme_admin": false, 00:07:49.407 "nvme_io": false, 00:07:49.407 "nvme_io_md": false, 00:07:49.407 "write_zeroes": true, 00:07:49.407 "zcopy": true, 00:07:49.407 "get_zone_info": false, 00:07:49.407 "zone_management": false, 00:07:49.407 "zone_append": false, 00:07:49.407 "compare": false, 00:07:49.407 "compare_and_write": false, 00:07:49.407 "abort": true, 00:07:49.407 "seek_hole": false, 00:07:49.407 "seek_data": false, 00:07:49.407 "copy": true, 00:07:49.407 "nvme_iov_md": false 00:07:49.407 }, 00:07:49.407 "memory_domains": [ 00:07:49.407 { 00:07:49.407 "dma_device_id": "system", 00:07:49.407 "dma_device_type": 1 00:07:49.407 }, 00:07:49.407 { 00:07:49.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.407 "dma_device_type": 2 00:07:49.407 } 00:07:49.407 ], 00:07:49.407 "driver_specific": {} 00:07:49.407 } 00:07:49.407 ] 00:07:49.407 14:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.407 14:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:49.407 14:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:49.407 14:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:49.407 14:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:49.407 14:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:07:49.407 14:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:49.407 14:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:49.407 14:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.407 14:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.407 14:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.407 14:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.407 14:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:49.407 14:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.407 14:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.407 14:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.407 14:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.407 14:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.407 "name": "Existed_Raid", 00:07:49.407 "uuid": "09e87b12-5bc0-4397-9efb-9f3162eb8cf5", 00:07:49.407 "strip_size_kb": 0, 00:07:49.407 "state": "configuring", 00:07:49.407 "raid_level": "raid1", 00:07:49.407 "superblock": true, 00:07:49.407 "num_base_bdevs": 3, 00:07:49.407 "num_base_bdevs_discovered": 1, 00:07:49.407 "num_base_bdevs_operational": 3, 00:07:49.407 "base_bdevs_list": [ 00:07:49.407 { 00:07:49.407 "name": "BaseBdev1", 00:07:49.407 "uuid": "322c1b4e-ee27-4135-8db2-34db0c1dd238", 00:07:49.407 "is_configured": true, 00:07:49.407 "data_offset": 2048, 00:07:49.407 "data_size": 63488 
00:07:49.407 }, 00:07:49.407 { 00:07:49.407 "name": "BaseBdev2", 00:07:49.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.407 "is_configured": false, 00:07:49.407 "data_offset": 0, 00:07:49.407 "data_size": 0 00:07:49.407 }, 00:07:49.407 { 00:07:49.407 "name": "BaseBdev3", 00:07:49.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.407 "is_configured": false, 00:07:49.407 "data_offset": 0, 00:07:49.407 "data_size": 0 00:07:49.407 } 00:07:49.407 ] 00:07:49.407 }' 00:07:49.407 14:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.407 14:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.670 14:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:49.670 14:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.670 14:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.670 [2024-10-01 14:32:41.251279] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:49.670 [2024-10-01 14:32:41.251440] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:49.670 14:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.670 14:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:49.670 14:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.670 14:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.670 [2024-10-01 14:32:41.263326] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:49.670 [2024-10-01 14:32:41.265311] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:49.670 [2024-10-01 14:32:41.265446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:49.670 [2024-10-01 14:32:41.265567] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:49.670 [2024-10-01 14:32:41.265583] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:49.670 14:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.670 14:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:49.670 14:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:49.670 14:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:49.670 14:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:49.670 14:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:49.670 14:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:49.670 14:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:49.670 14:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:49.670 14:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.670 14:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.670 14:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.670 14:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:07:49.670 14:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:49.670 14:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.670 14:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.670 14:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.670 14:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.670 14:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.670 "name": "Existed_Raid", 00:07:49.670 "uuid": "ab244d8b-f65d-4f4d-92c0-c7f6a1979690", 00:07:49.670 "strip_size_kb": 0, 00:07:49.670 "state": "configuring", 00:07:49.670 "raid_level": "raid1", 00:07:49.670 "superblock": true, 00:07:49.670 "num_base_bdevs": 3, 00:07:49.670 "num_base_bdevs_discovered": 1, 00:07:49.670 "num_base_bdevs_operational": 3, 00:07:49.670 "base_bdevs_list": [ 00:07:49.670 { 00:07:49.670 "name": "BaseBdev1", 00:07:49.670 "uuid": "322c1b4e-ee27-4135-8db2-34db0c1dd238", 00:07:49.670 "is_configured": true, 00:07:49.670 "data_offset": 2048, 00:07:49.670 "data_size": 63488 00:07:49.670 }, 00:07:49.670 { 00:07:49.670 "name": "BaseBdev2", 00:07:49.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.670 "is_configured": false, 00:07:49.670 "data_offset": 0, 00:07:49.670 "data_size": 0 00:07:49.670 }, 00:07:49.670 { 00:07:49.670 "name": "BaseBdev3", 00:07:49.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.670 "is_configured": false, 00:07:49.670 "data_offset": 0, 00:07:49.670 "data_size": 0 00:07:49.670 } 00:07:49.670 ] 00:07:49.670 }' 00:07:49.670 14:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.670 14:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:07:49.931 14:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:49.931 14:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.931 14:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.931 [2024-10-01 14:32:41.602093] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:49.931 BaseBdev2 00:07:49.931 14:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.931 14:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:49.931 14:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:49.931 14:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:49.931 14:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:49.931 14:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:49.931 14:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:49.931 14:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:49.931 14:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.931 14:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.193 14:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.193 14:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:50.193 14:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:50.193 14:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.193 [ 00:07:50.193 { 00:07:50.193 "name": "BaseBdev2", 00:07:50.193 "aliases": [ 00:07:50.193 "0aa25aaf-2ea4-4645-9ddd-94378f8dcded" 00:07:50.193 ], 00:07:50.193 "product_name": "Malloc disk", 00:07:50.193 "block_size": 512, 00:07:50.193 "num_blocks": 65536, 00:07:50.193 "uuid": "0aa25aaf-2ea4-4645-9ddd-94378f8dcded", 00:07:50.193 "assigned_rate_limits": { 00:07:50.193 "rw_ios_per_sec": 0, 00:07:50.193 "rw_mbytes_per_sec": 0, 00:07:50.193 "r_mbytes_per_sec": 0, 00:07:50.193 "w_mbytes_per_sec": 0 00:07:50.193 }, 00:07:50.193 "claimed": true, 00:07:50.193 "claim_type": "exclusive_write", 00:07:50.193 "zoned": false, 00:07:50.193 "supported_io_types": { 00:07:50.193 "read": true, 00:07:50.193 "write": true, 00:07:50.193 "unmap": true, 00:07:50.193 "flush": true, 00:07:50.193 "reset": true, 00:07:50.193 "nvme_admin": false, 00:07:50.193 "nvme_io": false, 00:07:50.193 "nvme_io_md": false, 00:07:50.193 "write_zeroes": true, 00:07:50.193 "zcopy": true, 00:07:50.193 "get_zone_info": false, 00:07:50.193 "zone_management": false, 00:07:50.193 "zone_append": false, 00:07:50.193 "compare": false, 00:07:50.193 "compare_and_write": false, 00:07:50.193 "abort": true, 00:07:50.193 "seek_hole": false, 00:07:50.193 "seek_data": false, 00:07:50.193 "copy": true, 00:07:50.193 "nvme_iov_md": false 00:07:50.193 }, 00:07:50.193 "memory_domains": [ 00:07:50.193 { 00:07:50.193 "dma_device_id": "system", 00:07:50.193 "dma_device_type": 1 00:07:50.193 }, 00:07:50.193 { 00:07:50.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.193 "dma_device_type": 2 00:07:50.193 } 00:07:50.193 ], 00:07:50.193 "driver_specific": {} 00:07:50.193 } 00:07:50.193 ] 00:07:50.193 14:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.193 14:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:07:50.193 14:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:50.193 14:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:50.193 14:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:50.193 14:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.193 14:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:50.193 14:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:50.193 14:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:50.193 14:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:50.193 14:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.193 14:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.193 14:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.193 14:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.193 14:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.193 14:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.193 14:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.193 14:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.193 14:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.193 
14:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.193 "name": "Existed_Raid", 00:07:50.193 "uuid": "ab244d8b-f65d-4f4d-92c0-c7f6a1979690", 00:07:50.193 "strip_size_kb": 0, 00:07:50.193 "state": "configuring", 00:07:50.193 "raid_level": "raid1", 00:07:50.193 "superblock": true, 00:07:50.193 "num_base_bdevs": 3, 00:07:50.193 "num_base_bdevs_discovered": 2, 00:07:50.193 "num_base_bdevs_operational": 3, 00:07:50.193 "base_bdevs_list": [ 00:07:50.193 { 00:07:50.193 "name": "BaseBdev1", 00:07:50.193 "uuid": "322c1b4e-ee27-4135-8db2-34db0c1dd238", 00:07:50.193 "is_configured": true, 00:07:50.193 "data_offset": 2048, 00:07:50.193 "data_size": 63488 00:07:50.193 }, 00:07:50.193 { 00:07:50.193 "name": "BaseBdev2", 00:07:50.193 "uuid": "0aa25aaf-2ea4-4645-9ddd-94378f8dcded", 00:07:50.193 "is_configured": true, 00:07:50.193 "data_offset": 2048, 00:07:50.193 "data_size": 63488 00:07:50.193 }, 00:07:50.193 { 00:07:50.193 "name": "BaseBdev3", 00:07:50.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.193 "is_configured": false, 00:07:50.193 "data_offset": 0, 00:07:50.193 "data_size": 0 00:07:50.193 } 00:07:50.193 ] 00:07:50.193 }' 00:07:50.193 14:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.193 14:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.455 14:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:50.455 14:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.455 14:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.455 [2024-10-01 14:32:41.980946] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:50.455 [2024-10-01 14:32:41.981297] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:07:50.455 [2024-10-01 14:32:41.981344] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:50.455 BaseBdev3 00:07:50.455 [2024-10-01 14:32:41.981845] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:50.455 [2024-10-01 14:32:41.981981] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:50.455 [2024-10-01 14:32:41.981991] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:50.455 [2024-10-01 14:32:41.982118] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:50.455 14:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.455 14:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:07:50.455 14:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:07:50.455 14:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:50.455 14:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:50.455 14:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:50.455 14:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:50.455 14:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:50.455 14:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.455 14:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.455 14:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.455 14:32:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:50.455 14:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.455 14:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.455 [ 00:07:50.455 { 00:07:50.455 "name": "BaseBdev3", 00:07:50.455 "aliases": [ 00:07:50.455 "26cede77-89b0-4bac-95d2-c35048a5455d" 00:07:50.455 ], 00:07:50.455 "product_name": "Malloc disk", 00:07:50.455 "block_size": 512, 00:07:50.455 "num_blocks": 65536, 00:07:50.455 "uuid": "26cede77-89b0-4bac-95d2-c35048a5455d", 00:07:50.455 "assigned_rate_limits": { 00:07:50.455 "rw_ios_per_sec": 0, 00:07:50.455 "rw_mbytes_per_sec": 0, 00:07:50.455 "r_mbytes_per_sec": 0, 00:07:50.455 "w_mbytes_per_sec": 0 00:07:50.455 }, 00:07:50.455 "claimed": true, 00:07:50.455 "claim_type": "exclusive_write", 00:07:50.455 "zoned": false, 00:07:50.455 "supported_io_types": { 00:07:50.455 "read": true, 00:07:50.455 "write": true, 00:07:50.455 "unmap": true, 00:07:50.455 "flush": true, 00:07:50.455 "reset": true, 00:07:50.455 "nvme_admin": false, 00:07:50.455 "nvme_io": false, 00:07:50.455 "nvme_io_md": false, 00:07:50.455 "write_zeroes": true, 00:07:50.455 "zcopy": true, 00:07:50.455 "get_zone_info": false, 00:07:50.455 "zone_management": false, 00:07:50.455 "zone_append": false, 00:07:50.455 "compare": false, 00:07:50.455 "compare_and_write": false, 00:07:50.455 "abort": true, 00:07:50.455 "seek_hole": false, 00:07:50.455 "seek_data": false, 00:07:50.455 "copy": true, 00:07:50.455 "nvme_iov_md": false 00:07:50.455 }, 00:07:50.455 "memory_domains": [ 00:07:50.455 { 00:07:50.455 "dma_device_id": "system", 00:07:50.455 "dma_device_type": 1 00:07:50.455 }, 00:07:50.455 { 00:07:50.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.455 "dma_device_type": 2 00:07:50.455 } 00:07:50.455 ], 00:07:50.455 "driver_specific": {} 00:07:50.455 } 00:07:50.455 ] 
00:07:50.455 14:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.455 14:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:50.455 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:50.455 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:50.455 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:07:50.455 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.455 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:50.455 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:50.455 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:50.455 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:50.455 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.455 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.455 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.455 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.455 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.455 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.455 14:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.455 
14:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.455 14:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.455 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.455 "name": "Existed_Raid", 00:07:50.455 "uuid": "ab244d8b-f65d-4f4d-92c0-c7f6a1979690", 00:07:50.455 "strip_size_kb": 0, 00:07:50.455 "state": "online", 00:07:50.455 "raid_level": "raid1", 00:07:50.455 "superblock": true, 00:07:50.455 "num_base_bdevs": 3, 00:07:50.455 "num_base_bdevs_discovered": 3, 00:07:50.455 "num_base_bdevs_operational": 3, 00:07:50.455 "base_bdevs_list": [ 00:07:50.455 { 00:07:50.455 "name": "BaseBdev1", 00:07:50.455 "uuid": "322c1b4e-ee27-4135-8db2-34db0c1dd238", 00:07:50.455 "is_configured": true, 00:07:50.455 "data_offset": 2048, 00:07:50.455 "data_size": 63488 00:07:50.455 }, 00:07:50.455 { 00:07:50.455 "name": "BaseBdev2", 00:07:50.455 "uuid": "0aa25aaf-2ea4-4645-9ddd-94378f8dcded", 00:07:50.455 "is_configured": true, 00:07:50.455 "data_offset": 2048, 00:07:50.455 "data_size": 63488 00:07:50.455 }, 00:07:50.455 { 00:07:50.455 "name": "BaseBdev3", 00:07:50.455 "uuid": "26cede77-89b0-4bac-95d2-c35048a5455d", 00:07:50.455 "is_configured": true, 00:07:50.455 "data_offset": 2048, 00:07:50.455 "data_size": 63488 00:07:50.455 } 00:07:50.455 ] 00:07:50.455 }' 00:07:50.455 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.455 14:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.717 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:50.717 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:50.717 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:07:50.717 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:50.717 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:50.717 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:50.717 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:50.717 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:50.717 14:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.717 14:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.717 [2024-10-01 14:32:42.329433] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:50.717 14:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.717 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:50.717 "name": "Existed_Raid", 00:07:50.717 "aliases": [ 00:07:50.717 "ab244d8b-f65d-4f4d-92c0-c7f6a1979690" 00:07:50.717 ], 00:07:50.717 "product_name": "Raid Volume", 00:07:50.717 "block_size": 512, 00:07:50.717 "num_blocks": 63488, 00:07:50.717 "uuid": "ab244d8b-f65d-4f4d-92c0-c7f6a1979690", 00:07:50.717 "assigned_rate_limits": { 00:07:50.717 "rw_ios_per_sec": 0, 00:07:50.717 "rw_mbytes_per_sec": 0, 00:07:50.717 "r_mbytes_per_sec": 0, 00:07:50.717 "w_mbytes_per_sec": 0 00:07:50.717 }, 00:07:50.717 "claimed": false, 00:07:50.717 "zoned": false, 00:07:50.717 "supported_io_types": { 00:07:50.717 "read": true, 00:07:50.717 "write": true, 00:07:50.717 "unmap": false, 00:07:50.717 "flush": false, 00:07:50.717 "reset": true, 00:07:50.717 "nvme_admin": false, 00:07:50.717 "nvme_io": false, 00:07:50.717 "nvme_io_md": false, 00:07:50.717 "write_zeroes": true, 
00:07:50.717 "zcopy": false, 00:07:50.717 "get_zone_info": false, 00:07:50.717 "zone_management": false, 00:07:50.717 "zone_append": false, 00:07:50.717 "compare": false, 00:07:50.717 "compare_and_write": false, 00:07:50.717 "abort": false, 00:07:50.717 "seek_hole": false, 00:07:50.717 "seek_data": false, 00:07:50.717 "copy": false, 00:07:50.717 "nvme_iov_md": false 00:07:50.717 }, 00:07:50.717 "memory_domains": [ 00:07:50.717 { 00:07:50.717 "dma_device_id": "system", 00:07:50.717 "dma_device_type": 1 00:07:50.717 }, 00:07:50.717 { 00:07:50.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.717 "dma_device_type": 2 00:07:50.717 }, 00:07:50.717 { 00:07:50.717 "dma_device_id": "system", 00:07:50.717 "dma_device_type": 1 00:07:50.717 }, 00:07:50.717 { 00:07:50.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.717 "dma_device_type": 2 00:07:50.717 }, 00:07:50.717 { 00:07:50.717 "dma_device_id": "system", 00:07:50.717 "dma_device_type": 1 00:07:50.717 }, 00:07:50.717 { 00:07:50.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.717 "dma_device_type": 2 00:07:50.717 } 00:07:50.717 ], 00:07:50.717 "driver_specific": { 00:07:50.717 "raid": { 00:07:50.717 "uuid": "ab244d8b-f65d-4f4d-92c0-c7f6a1979690", 00:07:50.717 "strip_size_kb": 0, 00:07:50.717 "state": "online", 00:07:50.717 "raid_level": "raid1", 00:07:50.717 "superblock": true, 00:07:50.717 "num_base_bdevs": 3, 00:07:50.717 "num_base_bdevs_discovered": 3, 00:07:50.717 "num_base_bdevs_operational": 3, 00:07:50.717 "base_bdevs_list": [ 00:07:50.717 { 00:07:50.717 "name": "BaseBdev1", 00:07:50.717 "uuid": "322c1b4e-ee27-4135-8db2-34db0c1dd238", 00:07:50.717 "is_configured": true, 00:07:50.717 "data_offset": 2048, 00:07:50.717 "data_size": 63488 00:07:50.717 }, 00:07:50.717 { 00:07:50.717 "name": "BaseBdev2", 00:07:50.717 "uuid": "0aa25aaf-2ea4-4645-9ddd-94378f8dcded", 00:07:50.717 "is_configured": true, 00:07:50.717 "data_offset": 2048, 00:07:50.717 "data_size": 63488 00:07:50.717 }, 00:07:50.717 { 
00:07:50.717 "name": "BaseBdev3", 00:07:50.717 "uuid": "26cede77-89b0-4bac-95d2-c35048a5455d", 00:07:50.717 "is_configured": true, 00:07:50.717 "data_offset": 2048, 00:07:50.717 "data_size": 63488 00:07:50.717 } 00:07:50.717 ] 00:07:50.717 } 00:07:50.717 } 00:07:50.717 }' 00:07:50.717 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:50.717 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:50.717 BaseBdev2 00:07:50.717 BaseBdev3' 00:07:50.717 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:50.979 14:32:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.979 [2024-10-01 14:32:42.513149] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.979 
14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.979 "name": "Existed_Raid", 00:07:50.979 "uuid": "ab244d8b-f65d-4f4d-92c0-c7f6a1979690", 00:07:50.979 "strip_size_kb": 0, 00:07:50.979 "state": "online", 00:07:50.979 "raid_level": "raid1", 00:07:50.979 "superblock": true, 00:07:50.979 "num_base_bdevs": 3, 00:07:50.979 "num_base_bdevs_discovered": 2, 00:07:50.979 "num_base_bdevs_operational": 2, 00:07:50.979 "base_bdevs_list": [ 00:07:50.979 { 00:07:50.979 "name": null, 00:07:50.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.979 "is_configured": false, 00:07:50.979 "data_offset": 0, 00:07:50.979 "data_size": 63488 00:07:50.979 }, 00:07:50.979 { 00:07:50.979 "name": "BaseBdev2", 00:07:50.979 "uuid": "0aa25aaf-2ea4-4645-9ddd-94378f8dcded", 00:07:50.979 "is_configured": true, 00:07:50.979 "data_offset": 2048, 00:07:50.979 "data_size": 63488 00:07:50.979 }, 00:07:50.979 { 00:07:50.979 "name": "BaseBdev3", 00:07:50.979 "uuid": "26cede77-89b0-4bac-95d2-c35048a5455d", 00:07:50.979 "is_configured": true, 00:07:50.979 "data_offset": 2048, 00:07:50.979 "data_size": 63488 00:07:50.979 } 00:07:50.979 ] 00:07:50.979 }' 00:07:50.979 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.979 
14:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.241 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:51.241 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:51.241 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:51.241 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.241 14:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.241 14:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.241 14:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.241 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:51.241 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:51.241 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:51.241 14:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.241 14:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.241 [2024-10-01 14:32:42.920701] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:51.503 14:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.503 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:51.503 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:51.503 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 
00:07:51.503 14:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.503 14:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.503 14:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.503 14:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.503 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:51.503 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:51.503 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:07:51.503 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.503 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.503 [2024-10-01 14:32:43.020062] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:51.503 [2024-10-01 14:32:43.020156] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:51.503 [2024-10-01 14:32:43.079513] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:51.503 [2024-10-01 14:32:43.079559] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:51.503 [2024-10-01 14:32:43.079570] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:51.503 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.503 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:51.503 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 
-- # (( i < num_base_bdevs )) 00:07:51.503 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.503 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:51.503 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.503 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.503 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.503 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:51.503 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:51.503 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:07:51.503 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:07:51.503 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:51.503 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:51.503 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.503 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.503 BaseBdev2 00:07:51.503 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.503 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:07:51.503 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:51.503 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:51.503 14:32:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:51.503 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:51.503 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:51.503 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:51.503 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.503 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.503 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.503 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:51.503 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.503 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.503 [ 00:07:51.503 { 00:07:51.503 "name": "BaseBdev2", 00:07:51.503 "aliases": [ 00:07:51.503 "fa4746d8-854e-46c6-a3b3-7d8447111999" 00:07:51.503 ], 00:07:51.503 "product_name": "Malloc disk", 00:07:51.503 "block_size": 512, 00:07:51.503 "num_blocks": 65536, 00:07:51.503 "uuid": "fa4746d8-854e-46c6-a3b3-7d8447111999", 00:07:51.503 "assigned_rate_limits": { 00:07:51.503 "rw_ios_per_sec": 0, 00:07:51.504 "rw_mbytes_per_sec": 0, 00:07:51.504 "r_mbytes_per_sec": 0, 00:07:51.504 "w_mbytes_per_sec": 0 00:07:51.504 }, 00:07:51.504 "claimed": false, 00:07:51.504 "zoned": false, 00:07:51.504 "supported_io_types": { 00:07:51.504 "read": true, 00:07:51.504 "write": true, 00:07:51.504 "unmap": true, 00:07:51.504 "flush": true, 00:07:51.504 "reset": true, 00:07:51.504 "nvme_admin": false, 00:07:51.504 "nvme_io": false, 00:07:51.504 "nvme_io_md": false, 00:07:51.504 
"write_zeroes": true, 00:07:51.504 "zcopy": true, 00:07:51.504 "get_zone_info": false, 00:07:51.504 "zone_management": false, 00:07:51.504 "zone_append": false, 00:07:51.504 "compare": false, 00:07:51.504 "compare_and_write": false, 00:07:51.504 "abort": true, 00:07:51.504 "seek_hole": false, 00:07:51.504 "seek_data": false, 00:07:51.504 "copy": true, 00:07:51.504 "nvme_iov_md": false 00:07:51.504 }, 00:07:51.504 "memory_domains": [ 00:07:51.504 { 00:07:51.504 "dma_device_id": "system", 00:07:51.504 "dma_device_type": 1 00:07:51.504 }, 00:07:51.504 { 00:07:51.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.504 "dma_device_type": 2 00:07:51.504 } 00:07:51.504 ], 00:07:51.504 "driver_specific": {} 00:07:51.504 } 00:07:51.504 ] 00:07:51.504 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.504 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:51.504 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:51.504 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:51.504 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:51.504 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.504 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.764 BaseBdev3 00:07:51.764 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.764 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:07:51.764 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:07:51.764 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # 
local bdev_timeout= 00:07:51.764 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:51.764 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:51.764 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:51.764 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:51.764 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.764 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.764 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.764 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:51.764 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.764 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.764 [ 00:07:51.764 { 00:07:51.764 "name": "BaseBdev3", 00:07:51.764 "aliases": [ 00:07:51.764 "cdf3df3b-d553-47f3-9f40-abbdfaf3c4d7" 00:07:51.764 ], 00:07:51.764 "product_name": "Malloc disk", 00:07:51.764 "block_size": 512, 00:07:51.764 "num_blocks": 65536, 00:07:51.764 "uuid": "cdf3df3b-d553-47f3-9f40-abbdfaf3c4d7", 00:07:51.764 "assigned_rate_limits": { 00:07:51.764 "rw_ios_per_sec": 0, 00:07:51.764 "rw_mbytes_per_sec": 0, 00:07:51.764 "r_mbytes_per_sec": 0, 00:07:51.764 "w_mbytes_per_sec": 0 00:07:51.764 }, 00:07:51.764 "claimed": false, 00:07:51.764 "zoned": false, 00:07:51.764 "supported_io_types": { 00:07:51.764 "read": true, 00:07:51.764 "write": true, 00:07:51.764 "unmap": true, 00:07:51.764 "flush": true, 00:07:51.764 "reset": true, 00:07:51.764 "nvme_admin": false, 00:07:51.764 "nvme_io": false, 
00:07:51.764 "nvme_io_md": false, 00:07:51.764 "write_zeroes": true, 00:07:51.764 "zcopy": true, 00:07:51.764 "get_zone_info": false, 00:07:51.764 "zone_management": false, 00:07:51.764 "zone_append": false, 00:07:51.764 "compare": false, 00:07:51.764 "compare_and_write": false, 00:07:51.764 "abort": true, 00:07:51.764 "seek_hole": false, 00:07:51.764 "seek_data": false, 00:07:51.764 "copy": true, 00:07:51.764 "nvme_iov_md": false 00:07:51.764 }, 00:07:51.764 "memory_domains": [ 00:07:51.764 { 00:07:51.764 "dma_device_id": "system", 00:07:51.764 "dma_device_type": 1 00:07:51.764 }, 00:07:51.764 { 00:07:51.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.764 "dma_device_type": 2 00:07:51.764 } 00:07:51.764 ], 00:07:51.764 "driver_specific": {} 00:07:51.764 } 00:07:51.764 ] 00:07:51.764 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.764 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:51.764 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:51.764 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:51.764 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:51.764 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.764 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.764 [2024-10-01 14:32:43.228139] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:51.764 [2024-10-01 14:32:43.228286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:51.764 [2024-10-01 14:32:43.228355] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev2 is claimed 00:07:51.764 [2024-10-01 14:32:43.230326] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:51.764 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.764 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:51.764 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.764 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:51.765 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:51.765 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:51.765 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:51.765 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.765 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.765 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.765 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.765 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.765 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.765 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.765 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.765 14:32:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.765 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.765 "name": "Existed_Raid", 00:07:51.765 "uuid": "c97114ba-394c-4c8f-953a-5ca5fa437618", 00:07:51.765 "strip_size_kb": 0, 00:07:51.765 "state": "configuring", 00:07:51.765 "raid_level": "raid1", 00:07:51.765 "superblock": true, 00:07:51.765 "num_base_bdevs": 3, 00:07:51.765 "num_base_bdevs_discovered": 2, 00:07:51.765 "num_base_bdevs_operational": 3, 00:07:51.765 "base_bdevs_list": [ 00:07:51.765 { 00:07:51.765 "name": "BaseBdev1", 00:07:51.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.765 "is_configured": false, 00:07:51.765 "data_offset": 0, 00:07:51.765 "data_size": 0 00:07:51.765 }, 00:07:51.765 { 00:07:51.765 "name": "BaseBdev2", 00:07:51.765 "uuid": "fa4746d8-854e-46c6-a3b3-7d8447111999", 00:07:51.765 "is_configured": true, 00:07:51.765 "data_offset": 2048, 00:07:51.765 "data_size": 63488 00:07:51.765 }, 00:07:51.765 { 00:07:51.765 "name": "BaseBdev3", 00:07:51.765 "uuid": "cdf3df3b-d553-47f3-9f40-abbdfaf3c4d7", 00:07:51.765 "is_configured": true, 00:07:51.765 "data_offset": 2048, 00:07:51.765 "data_size": 63488 00:07:51.765 } 00:07:51.765 ] 00:07:51.765 }' 00:07:51.765 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.765 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.027 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:07:52.027 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.027 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.027 [2024-10-01 14:32:43.544202] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:52.027 14:32:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.027 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:52.027 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:52.027 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:52.027 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:52.027 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:52.027 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:52.027 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.027 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.027 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.027 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.027 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.027 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.027 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.027 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.027 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.027 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.027 "name": "Existed_Raid", 00:07:52.027 "uuid": 
"c97114ba-394c-4c8f-953a-5ca5fa437618", 00:07:52.027 "strip_size_kb": 0, 00:07:52.027 "state": "configuring", 00:07:52.027 "raid_level": "raid1", 00:07:52.027 "superblock": true, 00:07:52.027 "num_base_bdevs": 3, 00:07:52.027 "num_base_bdevs_discovered": 1, 00:07:52.027 "num_base_bdevs_operational": 3, 00:07:52.027 "base_bdevs_list": [ 00:07:52.027 { 00:07:52.027 "name": "BaseBdev1", 00:07:52.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.027 "is_configured": false, 00:07:52.027 "data_offset": 0, 00:07:52.027 "data_size": 0 00:07:52.027 }, 00:07:52.027 { 00:07:52.027 "name": null, 00:07:52.027 "uuid": "fa4746d8-854e-46c6-a3b3-7d8447111999", 00:07:52.027 "is_configured": false, 00:07:52.027 "data_offset": 0, 00:07:52.027 "data_size": 63488 00:07:52.027 }, 00:07:52.027 { 00:07:52.027 "name": "BaseBdev3", 00:07:52.027 "uuid": "cdf3df3b-d553-47f3-9f40-abbdfaf3c4d7", 00:07:52.027 "is_configured": true, 00:07:52.027 "data_offset": 2048, 00:07:52.027 "data_size": 63488 00:07:52.027 } 00:07:52.027 ] 00:07:52.027 }' 00:07:52.027 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.027 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.290 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:52.290 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.290 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.290 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.290 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.290 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:07:52.290 14:32:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:52.290 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.290 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.290 [2024-10-01 14:32:43.906741] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:52.290 BaseBdev1 00:07:52.290 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.290 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:07:52.290 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:52.290 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:52.290 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:52.290 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:52.290 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:52.290 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:52.290 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.290 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.290 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.290 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:52.290 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:07:52.290 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.290 [ 00:07:52.290 { 00:07:52.290 "name": "BaseBdev1", 00:07:52.290 "aliases": [ 00:07:52.290 "06731e3c-b7b2-4e19-bd61-f64d94ab3e72" 00:07:52.290 ], 00:07:52.290 "product_name": "Malloc disk", 00:07:52.290 "block_size": 512, 00:07:52.290 "num_blocks": 65536, 00:07:52.290 "uuid": "06731e3c-b7b2-4e19-bd61-f64d94ab3e72", 00:07:52.290 "assigned_rate_limits": { 00:07:52.290 "rw_ios_per_sec": 0, 00:07:52.290 "rw_mbytes_per_sec": 0, 00:07:52.290 "r_mbytes_per_sec": 0, 00:07:52.290 "w_mbytes_per_sec": 0 00:07:52.290 }, 00:07:52.290 "claimed": true, 00:07:52.290 "claim_type": "exclusive_write", 00:07:52.290 "zoned": false, 00:07:52.290 "supported_io_types": { 00:07:52.290 "read": true, 00:07:52.290 "write": true, 00:07:52.290 "unmap": true, 00:07:52.290 "flush": true, 00:07:52.290 "reset": true, 00:07:52.290 "nvme_admin": false, 00:07:52.290 "nvme_io": false, 00:07:52.290 "nvme_io_md": false, 00:07:52.290 "write_zeroes": true, 00:07:52.290 "zcopy": true, 00:07:52.290 "get_zone_info": false, 00:07:52.290 "zone_management": false, 00:07:52.290 "zone_append": false, 00:07:52.290 "compare": false, 00:07:52.290 "compare_and_write": false, 00:07:52.290 "abort": true, 00:07:52.290 "seek_hole": false, 00:07:52.290 "seek_data": false, 00:07:52.290 "copy": true, 00:07:52.290 "nvme_iov_md": false 00:07:52.290 }, 00:07:52.290 "memory_domains": [ 00:07:52.290 { 00:07:52.290 "dma_device_id": "system", 00:07:52.290 "dma_device_type": 1 00:07:52.290 }, 00:07:52.290 { 00:07:52.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.291 "dma_device_type": 2 00:07:52.291 } 00:07:52.291 ], 00:07:52.291 "driver_specific": {} 00:07:52.291 } 00:07:52.291 ] 00:07:52.291 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.291 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:52.291 
14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:52.291 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:52.291 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:52.291 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:52.291 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:52.291 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:52.291 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.291 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.291 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.291 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.291 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.291 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.291 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.291 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.291 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.291 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.291 "name": "Existed_Raid", 00:07:52.291 "uuid": "c97114ba-394c-4c8f-953a-5ca5fa437618", 00:07:52.291 "strip_size_kb": 0, 
00:07:52.291 "state": "configuring", 00:07:52.291 "raid_level": "raid1", 00:07:52.291 "superblock": true, 00:07:52.291 "num_base_bdevs": 3, 00:07:52.291 "num_base_bdevs_discovered": 2, 00:07:52.291 "num_base_bdevs_operational": 3, 00:07:52.291 "base_bdevs_list": [ 00:07:52.291 { 00:07:52.291 "name": "BaseBdev1", 00:07:52.291 "uuid": "06731e3c-b7b2-4e19-bd61-f64d94ab3e72", 00:07:52.291 "is_configured": true, 00:07:52.291 "data_offset": 2048, 00:07:52.291 "data_size": 63488 00:07:52.291 }, 00:07:52.291 { 00:07:52.291 "name": null, 00:07:52.291 "uuid": "fa4746d8-854e-46c6-a3b3-7d8447111999", 00:07:52.291 "is_configured": false, 00:07:52.291 "data_offset": 0, 00:07:52.291 "data_size": 63488 00:07:52.291 }, 00:07:52.291 { 00:07:52.291 "name": "BaseBdev3", 00:07:52.291 "uuid": "cdf3df3b-d553-47f3-9f40-abbdfaf3c4d7", 00:07:52.291 "is_configured": true, 00:07:52.291 "data_offset": 2048, 00:07:52.291 "data_size": 63488 00:07:52.291 } 00:07:52.291 ] 00:07:52.291 }' 00:07:52.291 14:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.291 14:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.865 14:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:52.865 14:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.865 14:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.865 14:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.865 14:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.866 14:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:07:52.866 14:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:07:52.866 14:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.866 14:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.866 [2024-10-01 14:32:44.282896] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:52.866 14:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.866 14:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:52.866 14:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:52.866 14:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:52.866 14:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:52.866 14:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:52.866 14:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:52.866 14:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.866 14:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.866 14:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.866 14:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.866 14:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.866 14:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.866 14:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:52.866 14:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.866 14:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.866 14:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.866 "name": "Existed_Raid", 00:07:52.866 "uuid": "c97114ba-394c-4c8f-953a-5ca5fa437618", 00:07:52.866 "strip_size_kb": 0, 00:07:52.866 "state": "configuring", 00:07:52.866 "raid_level": "raid1", 00:07:52.866 "superblock": true, 00:07:52.866 "num_base_bdevs": 3, 00:07:52.866 "num_base_bdevs_discovered": 1, 00:07:52.866 "num_base_bdevs_operational": 3, 00:07:52.866 "base_bdevs_list": [ 00:07:52.866 { 00:07:52.866 "name": "BaseBdev1", 00:07:52.866 "uuid": "06731e3c-b7b2-4e19-bd61-f64d94ab3e72", 00:07:52.866 "is_configured": true, 00:07:52.866 "data_offset": 2048, 00:07:52.866 "data_size": 63488 00:07:52.866 }, 00:07:52.866 { 00:07:52.866 "name": null, 00:07:52.866 "uuid": "fa4746d8-854e-46c6-a3b3-7d8447111999", 00:07:52.866 "is_configured": false, 00:07:52.866 "data_offset": 0, 00:07:52.866 "data_size": 63488 00:07:52.866 }, 00:07:52.866 { 00:07:52.866 "name": null, 00:07:52.866 "uuid": "cdf3df3b-d553-47f3-9f40-abbdfaf3c4d7", 00:07:52.866 "is_configured": false, 00:07:52.866 "data_offset": 0, 00:07:52.866 "data_size": 63488 00:07:52.866 } 00:07:52.866 ] 00:07:52.866 }' 00:07:52.866 14:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.866 14:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.128 14:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:53.128 14:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.128 14:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:53.128 14:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.128 14:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.128 14:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:07:53.128 14:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:07:53.128 14:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.128 14:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.128 [2024-10-01 14:32:44.638970] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:53.128 14:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.128 14:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:53.128 14:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:53.128 14:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:53.128 14:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:53.128 14:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:53.128 14:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:53.128 14:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.128 14:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.128 14:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:07:53.128 14:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.128 14:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.128 14:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:53.128 14:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.128 14:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.128 14:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.128 14:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.128 "name": "Existed_Raid", 00:07:53.128 "uuid": "c97114ba-394c-4c8f-953a-5ca5fa437618", 00:07:53.128 "strip_size_kb": 0, 00:07:53.128 "state": "configuring", 00:07:53.128 "raid_level": "raid1", 00:07:53.128 "superblock": true, 00:07:53.128 "num_base_bdevs": 3, 00:07:53.128 "num_base_bdevs_discovered": 2, 00:07:53.128 "num_base_bdevs_operational": 3, 00:07:53.128 "base_bdevs_list": [ 00:07:53.128 { 00:07:53.128 "name": "BaseBdev1", 00:07:53.128 "uuid": "06731e3c-b7b2-4e19-bd61-f64d94ab3e72", 00:07:53.128 "is_configured": true, 00:07:53.128 "data_offset": 2048, 00:07:53.128 "data_size": 63488 00:07:53.128 }, 00:07:53.128 { 00:07:53.128 "name": null, 00:07:53.128 "uuid": "fa4746d8-854e-46c6-a3b3-7d8447111999", 00:07:53.128 "is_configured": false, 00:07:53.128 "data_offset": 0, 00:07:53.128 "data_size": 63488 00:07:53.128 }, 00:07:53.128 { 00:07:53.128 "name": "BaseBdev3", 00:07:53.128 "uuid": "cdf3df3b-d553-47f3-9f40-abbdfaf3c4d7", 00:07:53.128 "is_configured": true, 00:07:53.128 "data_offset": 2048, 00:07:53.128 "data_size": 63488 00:07:53.128 } 00:07:53.128 ] 00:07:53.128 }' 00:07:53.128 14:32:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.128 14:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.391 14:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:53.391 14:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.391 14:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.391 14:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.391 14:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.391 14:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:07:53.391 14:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:53.391 14:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.391 14:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.391 [2024-10-01 14:32:44.999098] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:53.391 14:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.391 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:53.391 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:53.391 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:53.391 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:53.391 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:07:53.391 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:53.391 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.391 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.391 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.391 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.391 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.391 14:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.391 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:53.391 14:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.652 14:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.652 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.652 "name": "Existed_Raid", 00:07:53.652 "uuid": "c97114ba-394c-4c8f-953a-5ca5fa437618", 00:07:53.652 "strip_size_kb": 0, 00:07:53.652 "state": "configuring", 00:07:53.652 "raid_level": "raid1", 00:07:53.652 "superblock": true, 00:07:53.652 "num_base_bdevs": 3, 00:07:53.652 "num_base_bdevs_discovered": 1, 00:07:53.652 "num_base_bdevs_operational": 3, 00:07:53.652 "base_bdevs_list": [ 00:07:53.652 { 00:07:53.652 "name": null, 00:07:53.652 "uuid": "06731e3c-b7b2-4e19-bd61-f64d94ab3e72", 00:07:53.652 "is_configured": false, 00:07:53.652 "data_offset": 0, 00:07:53.652 "data_size": 63488 00:07:53.652 }, 00:07:53.652 { 00:07:53.652 "name": null, 00:07:53.652 "uuid": 
"fa4746d8-854e-46c6-a3b3-7d8447111999", 00:07:53.652 "is_configured": false, 00:07:53.652 "data_offset": 0, 00:07:53.652 "data_size": 63488 00:07:53.652 }, 00:07:53.652 { 00:07:53.652 "name": "BaseBdev3", 00:07:53.652 "uuid": "cdf3df3b-d553-47f3-9f40-abbdfaf3c4d7", 00:07:53.652 "is_configured": true, 00:07:53.652 "data_offset": 2048, 00:07:53.652 "data_size": 63488 00:07:53.652 } 00:07:53.652 ] 00:07:53.652 }' 00:07:53.652 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.652 14:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.913 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.913 14:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.913 14:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.913 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:53.913 14:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.913 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:07:53.913 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:07:53.913 14:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.913 14:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.913 [2024-10-01 14:32:45.446018] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:53.913 14:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.913 14:32:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:07:53.913 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:53.913 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:53.913 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:53.913 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:53.913 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:53.913 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.913 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.913 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.913 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.913 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.913 14:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.913 14:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.913 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:53.914 14:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.914 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.914 "name": "Existed_Raid", 00:07:53.914 "uuid": "c97114ba-394c-4c8f-953a-5ca5fa437618", 00:07:53.914 "strip_size_kb": 0, 00:07:53.914 "state": "configuring", 00:07:53.914 
"raid_level": "raid1", 00:07:53.914 "superblock": true, 00:07:53.914 "num_base_bdevs": 3, 00:07:53.914 "num_base_bdevs_discovered": 2, 00:07:53.914 "num_base_bdevs_operational": 3, 00:07:53.914 "base_bdevs_list": [ 00:07:53.914 { 00:07:53.914 "name": null, 00:07:53.914 "uuid": "06731e3c-b7b2-4e19-bd61-f64d94ab3e72", 00:07:53.914 "is_configured": false, 00:07:53.914 "data_offset": 0, 00:07:53.914 "data_size": 63488 00:07:53.914 }, 00:07:53.914 { 00:07:53.914 "name": "BaseBdev2", 00:07:53.914 "uuid": "fa4746d8-854e-46c6-a3b3-7d8447111999", 00:07:53.914 "is_configured": true, 00:07:53.914 "data_offset": 2048, 00:07:53.914 "data_size": 63488 00:07:53.914 }, 00:07:53.914 { 00:07:53.914 "name": "BaseBdev3", 00:07:53.914 "uuid": "cdf3df3b-d553-47f3-9f40-abbdfaf3c4d7", 00:07:53.914 "is_configured": true, 00:07:53.914 "data_offset": 2048, 00:07:53.914 "data_size": 63488 00:07:53.914 } 00:07:53.914 ] 00:07:53.914 }' 00:07:53.914 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.914 14:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.175 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:54.175 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.175 14:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.175 14:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.175 14:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.175 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:07:54.175 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.175 14:32:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:07:54.175 14:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.175 14:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.175 14:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.175 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 06731e3c-b7b2-4e19-bd61-f64d94ab3e72 00:07:54.175 14:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.175 14:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.437 NewBaseBdev 00:07:54.437 [2024-10-01 14:32:45.860330] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:07:54.437 [2024-10-01 14:32:45.860518] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:07:54.437 [2024-10-01 14:32:45.860531] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:54.437 [2024-10-01 14:32:45.860811] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:54.437 [2024-10-01 14:32:45.860952] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:07:54.437 [2024-10-01 14:32:45.860962] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:07:54.437 [2024-10-01 14:32:45.861078] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:54.437 14:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.437 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:07:54.437 
14:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:07:54.437 14:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:54.437 14:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:54.437 14:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:54.437 14:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:54.437 14:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:54.437 14:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.437 14:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.437 14:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.437 14:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:07:54.437 14:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.437 14:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.437 [ 00:07:54.437 { 00:07:54.437 "name": "NewBaseBdev", 00:07:54.437 "aliases": [ 00:07:54.437 "06731e3c-b7b2-4e19-bd61-f64d94ab3e72" 00:07:54.437 ], 00:07:54.437 "product_name": "Malloc disk", 00:07:54.437 "block_size": 512, 00:07:54.437 "num_blocks": 65536, 00:07:54.437 "uuid": "06731e3c-b7b2-4e19-bd61-f64d94ab3e72", 00:07:54.437 "assigned_rate_limits": { 00:07:54.437 "rw_ios_per_sec": 0, 00:07:54.437 "rw_mbytes_per_sec": 0, 00:07:54.437 "r_mbytes_per_sec": 0, 00:07:54.437 "w_mbytes_per_sec": 0 00:07:54.437 }, 00:07:54.437 "claimed": true, 00:07:54.437 "claim_type": "exclusive_write", 00:07:54.437 
"zoned": false, 00:07:54.437 "supported_io_types": { 00:07:54.437 "read": true, 00:07:54.437 "write": true, 00:07:54.437 "unmap": true, 00:07:54.437 "flush": true, 00:07:54.437 "reset": true, 00:07:54.437 "nvme_admin": false, 00:07:54.437 "nvme_io": false, 00:07:54.437 "nvme_io_md": false, 00:07:54.437 "write_zeroes": true, 00:07:54.437 "zcopy": true, 00:07:54.437 "get_zone_info": false, 00:07:54.437 "zone_management": false, 00:07:54.437 "zone_append": false, 00:07:54.437 "compare": false, 00:07:54.437 "compare_and_write": false, 00:07:54.437 "abort": true, 00:07:54.437 "seek_hole": false, 00:07:54.437 "seek_data": false, 00:07:54.437 "copy": true, 00:07:54.437 "nvme_iov_md": false 00:07:54.437 }, 00:07:54.437 "memory_domains": [ 00:07:54.437 { 00:07:54.437 "dma_device_id": "system", 00:07:54.437 "dma_device_type": 1 00:07:54.437 }, 00:07:54.437 { 00:07:54.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.437 "dma_device_type": 2 00:07:54.437 } 00:07:54.437 ], 00:07:54.437 "driver_specific": {} 00:07:54.437 } 00:07:54.437 ] 00:07:54.437 14:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.437 14:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:54.437 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:07:54.437 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:54.437 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:54.437 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:54.437 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:54.437 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:07:54.437 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:54.437 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:54.437 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:54.437 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:54.437 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:54.437 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:54.437 14:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:54.437 14:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:54.437 14:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:54.437 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:54.437 "name": "Existed_Raid",
00:07:54.437 "uuid": "c97114ba-394c-4c8f-953a-5ca5fa437618",
00:07:54.437 "strip_size_kb": 0,
00:07:54.437 "state": "online",
00:07:54.437 "raid_level": "raid1",
00:07:54.437 "superblock": true,
00:07:54.437 "num_base_bdevs": 3,
00:07:54.437 "num_base_bdevs_discovered": 3,
00:07:54.437 "num_base_bdevs_operational": 3,
00:07:54.437 "base_bdevs_list": [
00:07:54.437 {
00:07:54.437 "name": "NewBaseBdev",
00:07:54.437 "uuid": "06731e3c-b7b2-4e19-bd61-f64d94ab3e72",
00:07:54.437 "is_configured": true,
00:07:54.437 "data_offset": 2048,
00:07:54.437 "data_size": 63488
00:07:54.438 },
00:07:54.438 {
00:07:54.438 "name": "BaseBdev2",
00:07:54.438 "uuid": "fa4746d8-854e-46c6-a3b3-7d8447111999",
00:07:54.438 "is_configured": true,
00:07:54.438 "data_offset": 2048,
00:07:54.438 "data_size": 63488
00:07:54.438 },
00:07:54.438 {
00:07:54.438 "name": "BaseBdev3",
00:07:54.438 "uuid": "cdf3df3b-d553-47f3-9f40-abbdfaf3c4d7",
00:07:54.438 "is_configured": true,
00:07:54.438 "data_offset": 2048,
00:07:54.438 "data_size": 63488
00:07:54.438 }
00:07:54.438 ]
00:07:54.438 }'
00:07:54.438 14:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:54.438 14:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:54.700 14:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:07:54.700 14:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:07:54.700 14:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:07:54.700 14:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:07:54.700 14:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:07:54.700 14:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:07:54.700 14:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:07:54.700 14:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:07:54.700 14:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:54.700 14:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:54.700 [2024-10-01 14:32:46.220828] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:54.700 14:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:54.700 14:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:07:54.700 "name": "Existed_Raid",
00:07:54.700 "aliases": [
00:07:54.700 "c97114ba-394c-4c8f-953a-5ca5fa437618"
00:07:54.700 ],
00:07:54.700 "product_name": "Raid Volume",
00:07:54.700 "block_size": 512,
00:07:54.700 "num_blocks": 63488,
00:07:54.700 "uuid": "c97114ba-394c-4c8f-953a-5ca5fa437618",
00:07:54.700 "assigned_rate_limits": {
00:07:54.700 "rw_ios_per_sec": 0,
00:07:54.700 "rw_mbytes_per_sec": 0,
00:07:54.700 "r_mbytes_per_sec": 0,
00:07:54.700 "w_mbytes_per_sec": 0
00:07:54.700 },
00:07:54.700 "claimed": false,
00:07:54.700 "zoned": false,
00:07:54.700 "supported_io_types": {
00:07:54.700 "read": true,
00:07:54.700 "write": true,
00:07:54.700 "unmap": false,
00:07:54.700 "flush": false,
00:07:54.700 "reset": true,
00:07:54.700 "nvme_admin": false,
00:07:54.700 "nvme_io": false,
00:07:54.700 "nvme_io_md": false,
00:07:54.700 "write_zeroes": true,
00:07:54.700 "zcopy": false,
00:07:54.700 "get_zone_info": false,
00:07:54.700 "zone_management": false,
00:07:54.700 "zone_append": false,
00:07:54.700 "compare": false,
00:07:54.700 "compare_and_write": false,
00:07:54.700 "abort": false,
00:07:54.700 "seek_hole": false,
00:07:54.700 "seek_data": false,
00:07:54.700 "copy": false,
00:07:54.700 "nvme_iov_md": false
00:07:54.700 },
00:07:54.700 "memory_domains": [
00:07:54.700 {
00:07:54.700 "dma_device_id": "system",
00:07:54.700 "dma_device_type": 1
00:07:54.700 },
00:07:54.700 {
00:07:54.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:54.700 "dma_device_type": 2
00:07:54.700 },
00:07:54.700 {
00:07:54.700 "dma_device_id": "system",
00:07:54.700 "dma_device_type": 1
00:07:54.700 },
00:07:54.700 {
00:07:54.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:54.700 "dma_device_type": 2
00:07:54.700 },
00:07:54.700 {
00:07:54.700 "dma_device_id": "system",
00:07:54.700 "dma_device_type": 1
00:07:54.700 },
00:07:54.700 {
00:07:54.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:54.700 "dma_device_type": 2
00:07:54.700 }
00:07:54.700 ],
00:07:54.700 "driver_specific": {
00:07:54.700 "raid": {
00:07:54.700 "uuid": "c97114ba-394c-4c8f-953a-5ca5fa437618",
00:07:54.700 "strip_size_kb": 0,
00:07:54.700 "state": "online",
00:07:54.700 "raid_level": "raid1",
00:07:54.700 "superblock": true,
00:07:54.700 "num_base_bdevs": 3,
00:07:54.700 "num_base_bdevs_discovered": 3,
00:07:54.700 "num_base_bdevs_operational": 3,
00:07:54.700 "base_bdevs_list": [
00:07:54.700 {
00:07:54.700 "name": "NewBaseBdev",
00:07:54.700 "uuid": "06731e3c-b7b2-4e19-bd61-f64d94ab3e72",
00:07:54.700 "is_configured": true,
00:07:54.700 "data_offset": 2048,
00:07:54.700 "data_size": 63488
00:07:54.700 },
00:07:54.700 {
00:07:54.700 "name": "BaseBdev2",
00:07:54.700 "uuid": "fa4746d8-854e-46c6-a3b3-7d8447111999",
00:07:54.700 "is_configured": true,
00:07:54.700 "data_offset": 2048,
00:07:54.700 "data_size": 63488
00:07:54.700 },
00:07:54.700 {
00:07:54.700 "name": "BaseBdev3",
00:07:54.700 "uuid": "cdf3df3b-d553-47f3-9f40-abbdfaf3c4d7",
00:07:54.700 "is_configured": true,
00:07:54.700 "data_offset": 2048,
00:07:54.700 "data_size": 63488
00:07:54.700 }
00:07:54.700 ]
00:07:54.700 }
00:07:54.700 }
00:07:54.700 }'
00:07:54.700 14:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:07:54.701 14:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:07:54.701 BaseBdev2
00:07:54.701 BaseBdev3'
00:07:54.701 14:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:54.701 14:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:07:54.701 14:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:54.701 14:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:07:54.701 14:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:54.701 14:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:54.701 14:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:54.701 14:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:54.701 14:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:54.701 14:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:54.701 14:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:54.701 14:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:07:54.701 14:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:54.701 14:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:54.701 14:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:54.701 14:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:54.701 14:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:54.701 14:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:54.701 14:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:54.701 14:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:07:54.701 14:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:54.701 14:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:54.701 14:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:54.962 14:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:54.962 14:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:54.962 14:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:54.962 14:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:07:54.962 14:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:54.962 14:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:54.962 [2024-10-01 14:32:46.404493] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:07:54.962 [2024-10-01 14:32:46.404605] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:07:54.962 [2024-10-01 14:32:46.404677] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:54.962 [2024-10-01 14:32:46.404977] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:54.962 [2024-10-01 14:32:46.404987] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline
00:07:54.962 14:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:54.962 14:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66576
00:07:54.962 14:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 66576 ']'
00:07:54.962 14:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 66576
00:07:54.962 14:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname
00:07:54.962 14:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:54.962 14:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66576
killing process with pid 66576
00:07:54.962 14:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:54.962 14:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:54.962 14:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66576'
00:07:54.962 14:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 66576
00:07:54.962 [2024-10-01 14:32:46.434671] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:54.962 14:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 66576
00:07:54.962 [2024-10-01 14:32:46.623084] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:55.902 ************************************
00:07:55.902 END TEST raid_state_function_test_sb
************************************
00:07:55.902 14:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0
00:07:55.902
00:07:55.902 real 0m7.846s
00:07:55.902 user 0m12.436s
00:07:55.902 sys 0m1.220s
00:07:55.902 14:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:55.902 14:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:55.902 14:32:47 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3
00:07:55.902 14:32:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:07:55.902 14:32:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:55.902 14:32:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:55.902 ************************************
00:07:55.902 START TEST raid_superblock_test
************************************
00:07:55.902 14:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3
00:07:55.902 14:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1
00:07:55.902 14:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3
00:07:55.902 14:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:07:55.902 14:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:07:55.902 14:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:07:55.902 14:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:07:55.902 14:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:07:55.902 14:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:07:55.902 14:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:07:55.902 14:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:07:55.902 14:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:07:55.902 14:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:07:55.902 14:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:07:55.902 14:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']'
00:07:55.902 14:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0
00:07:55.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:55.902 14:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=67168
00:07:55.902 14:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 67168
00:07:55.902 14:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:07:55.902 14:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 67168 ']'
00:07:55.902 14:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:55.902 14:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:55.902 14:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:55.902 14:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:55.902 14:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:56.161 [2024-10-01 14:32:47.601417] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization...
00:07:56.161 [2024-10-01 14:32:47.601786] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67168 ]
00:07:56.161 [2024-10-01 14:32:47.757412] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:56.421 [2024-10-01 14:32:47.945160] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:56.421 [2024-10-01 14:32:48.081056] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:56.421 [2024-10-01 14:32:48.081091] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:56.992 malloc1
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:56.992 [2024-10-01 14:32:48.471510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:07:56.992 [2024-10-01 14:32:48.471601] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:56.992 [2024-10-01 14:32:48.471627] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:07:56.992 [2024-10-01 14:32:48.471639] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:56.992 [2024-10-01 14:32:48.474008] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:56.992 [2024-10-01 14:32:48.474052] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:07:56.992 pt1
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:56.992 malloc2
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:56.992 [2024-10-01 14:32:48.533041] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:07:56.992 [2024-10-01 14:32:48.533111] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:56.992 [2024-10-01 14:32:48.533139] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:07:56.992 [2024-10-01 14:32:48.533148] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:56.992 [2024-10-01 14:32:48.535350] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:56.992 [2024-10-01 14:32:48.535383] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:07:56.992 pt2
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:56.992 malloc3
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:56.992 [2024-10-01 14:32:48.569132] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:07:56.992 [2024-10-01 14:32:48.569179] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:56.992 [2024-10-01 14:32:48.569199] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:07:56.992 [2024-10-01 14:32:48.569208] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:56.992 [2024-10-01 14:32:48.571348] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:56.992 [2024-10-01 14:32:48.571378] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:07:56.992 pt3
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:56.992 14:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:56.993 [2024-10-01 14:32:48.577196] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:07:56.993 [2024-10-01 14:32:48.579071] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:07:56.993 [2024-10-01 14:32:48.579138] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:07:56.993 [2024-10-01 14:32:48.579298] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:07:56.993 [2024-10-01 14:32:48.579310] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:07:56.993 [2024-10-01 14:32:48.579567] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:07:56.993 [2024-10-01 14:32:48.579738] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:07:56.993 [2024-10-01 14:32:48.579748] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:07:56.993 [2024-10-01 14:32:48.579906] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:56.993 14:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:56.993 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:07:56.993 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:56.993 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:56.993 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:07:56.993 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:07:56.993 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:07:56.993 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:56.993 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:56.993 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:56.993 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:56.993 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:56.993 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:56.993 14:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:56.993 14:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:56.993 14:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:56.993 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:56.993 "name": "raid_bdev1",
00:07:56.993 "uuid": "cb31261f-39d5-43f1-b5b5-6eaaaf255752",
00:07:56.993 "strip_size_kb": 0,
00:07:56.993 "state": "online",
00:07:56.993 "raid_level": "raid1",
00:07:56.993 "superblock": true,
00:07:56.993 "num_base_bdevs": 3,
00:07:56.993 "num_base_bdevs_discovered": 3,
00:07:56.993 "num_base_bdevs_operational": 3,
00:07:56.993 "base_bdevs_list": [
00:07:56.993 {
00:07:56.993 "name": "pt1",
00:07:56.993 "uuid": "00000000-0000-0000-0000-000000000001",
00:07:56.993 "is_configured": true,
00:07:56.993 "data_offset": 2048,
00:07:56.993 "data_size": 63488
00:07:56.993 },
00:07:56.993 {
00:07:56.993 "name": "pt2",
00:07:56.993 "uuid": "00000000-0000-0000-0000-000000000002",
00:07:56.993 "is_configured": true,
00:07:56.993 "data_offset": 2048,
00:07:56.993 "data_size": 63488
00:07:56.993 },
00:07:56.993 {
00:07:56.993 "name": "pt3",
00:07:56.993 "uuid": "00000000-0000-0000-0000-000000000003",
00:07:56.993 "is_configured": true,
00:07:56.993 "data_offset": 2048,
00:07:56.993 "data_size": 63488
00:07:56.993 }
00:07:56.993 ]
00:07:56.993 }'
00:07:56.993 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:56.993 14:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:57.254 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:07:57.254 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:07:57.254 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:07:57.254 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:07:57.254 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:07:57.254 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:07:57.254 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:07:57.254 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:07:57.254 14:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:57.254 14:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:57.254 [2024-10-01 14:32:48.909581] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:57.254 14:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:57.254 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:07:57.254 "name": "raid_bdev1",
00:07:57.254 "aliases": [
00:07:57.254 "cb31261f-39d5-43f1-b5b5-6eaaaf255752"
00:07:57.254 ],
00:07:57.254 "product_name": "Raid Volume",
00:07:57.254 "block_size": 512,
00:07:57.254 "num_blocks": 63488,
00:07:57.254 "uuid": "cb31261f-39d5-43f1-b5b5-6eaaaf255752",
00:07:57.254 "assigned_rate_limits": {
00:07:57.254 "rw_ios_per_sec": 0,
00:07:57.254 "rw_mbytes_per_sec": 0,
00:07:57.254 "r_mbytes_per_sec": 0,
00:07:57.254 "w_mbytes_per_sec": 0
00:07:57.254 },
00:07:57.254 "claimed": false,
00:07:57.254 "zoned": false,
00:07:57.254 "supported_io_types": {
00:07:57.254 "read": true,
00:07:57.254 "write": true,
00:07:57.254 "unmap": false,
00:07:57.254 "flush": false,
00:07:57.254 "reset": true,
00:07:57.254 "nvme_admin": false,
00:07:57.254 "nvme_io": false,
00:07:57.254 "nvme_io_md": false,
00:07:57.254 "write_zeroes": true,
00:07:57.254 "zcopy": false,
00:07:57.254 "get_zone_info": false,
00:07:57.254 "zone_management": false,
00:07:57.254 "zone_append": false,
00:07:57.254 "compare": false,
00:07:57.254 "compare_and_write": false,
00:07:57.254 "abort": false,
00:07:57.254 "seek_hole": false,
00:07:57.254 "seek_data": false,
00:07:57.254 "copy": false,
00:07:57.254 "nvme_iov_md": false
00:07:57.254 },
00:07:57.254 "memory_domains": [
00:07:57.254 {
00:07:57.254 "dma_device_id": "system",
00:07:57.254 "dma_device_type": 1
00:07:57.254 },
00:07:57.254 {
00:07:57.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:57.254 "dma_device_type": 2
00:07:57.254 },
00:07:57.254 {
00:07:57.254 "dma_device_id": "system",
00:07:57.254 "dma_device_type": 1
00:07:57.254 },
00:07:57.254 {
00:07:57.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:57.254 "dma_device_type": 2
00:07:57.254 },
00:07:57.254 {
00:07:57.254 "dma_device_id": "system",
00:07:57.254 "dma_device_type": 1
00:07:57.254 },
00:07:57.254 {
00:07:57.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:57.254 "dma_device_type": 2
00:07:57.254 }
00:07:57.254 ],
00:07:57.254 "driver_specific": {
00:07:57.254 "raid": {
00:07:57.254 "uuid": "cb31261f-39d5-43f1-b5b5-6eaaaf255752",
00:07:57.254 "strip_size_kb": 0,
00:07:57.254 "state": "online",
00:07:57.254 "raid_level": "raid1",
00:07:57.254 "superblock": true,
00:07:57.254 "num_base_bdevs": 3,
00:07:57.254 "num_base_bdevs_discovered": 3,
00:07:57.254 "num_base_bdevs_operational": 3,
00:07:57.254 "base_bdevs_list": [
00:07:57.254 {
00:07:57.254 "name": "pt1",
00:07:57.254 "uuid": "00000000-0000-0000-0000-000000000001",
00:07:57.254 "is_configured": true,
00:07:57.254 "data_offset": 2048,
00:07:57.254 "data_size": 63488
00:07:57.254 },
00:07:57.254 {
00:07:57.254 "name": "pt2",
00:07:57.254 "uuid": "00000000-0000-0000-0000-000000000002",
00:07:57.254 "is_configured": true,
00:07:57.254 "data_offset": 2048,
00:07:57.254 "data_size": 63488
00:07:57.254 },
00:07:57.254 {
00:07:57.254 "name": "pt3",
00:07:57.254 "uuid": "00000000-0000-0000-0000-000000000003",
00:07:57.254 "is_configured": true,
00:07:57.254 "data_offset": 2048,
00:07:57.254 "data_size": 63488
00:07:57.254 }
00:07:57.254 ]
00:07:57.254 }
00:07:57.254 }
00:07:57.254 }'
00:07:57.254 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:07:57.514 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:07:57.514 pt2
00:07:57.514 pt3'
00:07:57.514 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:57.514 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:07:57.514 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:57.514 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:57.514 14:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:07:57.514 14:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:57.514 14:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:57.514 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:57.514 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:57.514 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:57.514 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:57.514 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:07:57.514 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:57.514 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:57.514 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:57.514 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:57.514 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:57.514 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:57.514 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:57.514 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:07:57.514 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:57.514 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:57.514 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:57.514 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:57.514 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:57.514 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:57.514 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:07:57.514 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:57.514 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:07:57.514 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:57.514 [2024-10-01 14:32:49.105562] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:57.514 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[
0 == 0 ]] 00:07:57.514 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=cb31261f-39d5-43f1-b5b5-6eaaaf255752 00:07:57.514 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z cb31261f-39d5-43f1-b5b5-6eaaaf255752 ']' 00:07:57.514 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:57.515 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.515 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.515 [2024-10-01 14:32:49.137261] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:57.515 [2024-10-01 14:32:49.137288] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:57.515 [2024-10-01 14:32:49.137368] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:57.515 [2024-10-01 14:32:49.137445] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:57.515 [2024-10-01 14:32:49.137455] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:57.515 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.515 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.515 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:57.515 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.515 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.515 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.515 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:57.515 
14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:57.515 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:57.515 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:57.515 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.515 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.515 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.515 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:57.515 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:57.515 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.515 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.515 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.515 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:57.515 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:07:57.515 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.515 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.775 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.775 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:57.775 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.775 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:07:57.775 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:57.775 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.775 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:57.775 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:07:57.775 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:57.775 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:07:57.775 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:57.775 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:57.775 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:57.775 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:57.775 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:07:57.775 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.775 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.775 [2024-10-01 14:32:49.245335] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:57.775 [2024-10-01 14:32:49.247224] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:57.775 [2024-10-01 14:32:49.247280] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc3 is claimed 00:07:57.775 [2024-10-01 14:32:49.247329] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:57.775 [2024-10-01 14:32:49.247371] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:57.775 [2024-10-01 14:32:49.247391] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:07:57.775 [2024-10-01 14:32:49.247409] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:57.775 [2024-10-01 14:32:49.247419] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:57.775 request: 00:07:57.775 { 00:07:57.775 "name": "raid_bdev1", 00:07:57.775 "raid_level": "raid1", 00:07:57.775 "base_bdevs": [ 00:07:57.775 "malloc1", 00:07:57.775 "malloc2", 00:07:57.775 "malloc3" 00:07:57.775 ], 00:07:57.775 "superblock": false, 00:07:57.775 "method": "bdev_raid_create", 00:07:57.775 "req_id": 1 00:07:57.775 } 00:07:57.775 Got JSON-RPC error response 00:07:57.775 response: 00:07:57.775 { 00:07:57.775 "code": -17, 00:07:57.775 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:57.775 } 00:07:57.775 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:57.775 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:57.775 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:57.775 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:57.775 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:57.775 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.775 14:32:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.775 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.775 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:57.775 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.775 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:57.775 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:57.775 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:57.775 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.775 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.775 [2024-10-01 14:32:49.289311] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:57.775 [2024-10-01 14:32:49.289369] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:57.775 [2024-10-01 14:32:49.289390] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:07:57.775 [2024-10-01 14:32:49.289399] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:57.775 [2024-10-01 14:32:49.291551] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:57.775 [2024-10-01 14:32:49.291582] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:57.775 [2024-10-01 14:32:49.291659] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:57.775 [2024-10-01 14:32:49.291716] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:57.775 pt1 00:07:57.775 14:32:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.775 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:07:57.775 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:57.775 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:57.775 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:57.775 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:57.775 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:57.775 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.775 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.775 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.775 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.775 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.775 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:57.775 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.775 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.776 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.776 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.776 "name": "raid_bdev1", 00:07:57.776 "uuid": "cb31261f-39d5-43f1-b5b5-6eaaaf255752", 00:07:57.776 "strip_size_kb": 0, 00:07:57.776 "state": "configuring", 00:07:57.776 
"raid_level": "raid1", 00:07:57.776 "superblock": true, 00:07:57.776 "num_base_bdevs": 3, 00:07:57.776 "num_base_bdevs_discovered": 1, 00:07:57.776 "num_base_bdevs_operational": 3, 00:07:57.776 "base_bdevs_list": [ 00:07:57.776 { 00:07:57.776 "name": "pt1", 00:07:57.776 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:57.776 "is_configured": true, 00:07:57.776 "data_offset": 2048, 00:07:57.776 "data_size": 63488 00:07:57.776 }, 00:07:57.776 { 00:07:57.776 "name": null, 00:07:57.776 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:57.776 "is_configured": false, 00:07:57.776 "data_offset": 2048, 00:07:57.776 "data_size": 63488 00:07:57.776 }, 00:07:57.776 { 00:07:57.776 "name": null, 00:07:57.776 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:57.776 "is_configured": false, 00:07:57.776 "data_offset": 2048, 00:07:57.776 "data_size": 63488 00:07:57.776 } 00:07:57.776 ] 00:07:57.776 }' 00:07:57.776 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.776 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.083 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:07:58.083 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:58.083 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.083 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.083 [2024-10-01 14:32:49.601543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:58.083 [2024-10-01 14:32:49.601654] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.083 [2024-10-01 14:32:49.601695] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:07:58.083 [2024-10-01 14:32:49.601737] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.083 [2024-10-01 14:32:49.602552] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.083 [2024-10-01 14:32:49.602599] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:58.083 [2024-10-01 14:32:49.602778] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:58.083 [2024-10-01 14:32:49.602821] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:58.083 pt2 00:07:58.083 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.083 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:07:58.083 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.083 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.083 [2024-10-01 14:32:49.609442] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:07:58.083 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.083 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:07:58.083 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:58.083 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:58.083 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:58.083 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:58.083 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:58.083 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:58.083 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.083 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.083 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.083 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.083 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:58.083 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.083 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.083 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.083 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.083 "name": "raid_bdev1", 00:07:58.083 "uuid": "cb31261f-39d5-43f1-b5b5-6eaaaf255752", 00:07:58.083 "strip_size_kb": 0, 00:07:58.083 "state": "configuring", 00:07:58.083 "raid_level": "raid1", 00:07:58.083 "superblock": true, 00:07:58.083 "num_base_bdevs": 3, 00:07:58.083 "num_base_bdevs_discovered": 1, 00:07:58.083 "num_base_bdevs_operational": 3, 00:07:58.083 "base_bdevs_list": [ 00:07:58.083 { 00:07:58.083 "name": "pt1", 00:07:58.083 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:58.083 "is_configured": true, 00:07:58.083 "data_offset": 2048, 00:07:58.083 "data_size": 63488 00:07:58.083 }, 00:07:58.083 { 00:07:58.083 "name": null, 00:07:58.083 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:58.083 "is_configured": false, 00:07:58.083 "data_offset": 0, 00:07:58.083 "data_size": 63488 00:07:58.083 }, 00:07:58.083 { 00:07:58.083 "name": null, 00:07:58.083 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:58.083 "is_configured": false, 00:07:58.083 "data_offset": 2048, 00:07:58.083 
"data_size": 63488 00:07:58.083 } 00:07:58.083 ] 00:07:58.083 }' 00:07:58.083 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.083 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.344 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:58.344 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:58.344 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:58.344 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.344 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.344 [2024-10-01 14:32:49.929480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:58.344 [2024-10-01 14:32:49.929538] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.344 [2024-10-01 14:32:49.929555] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:07:58.344 [2024-10-01 14:32:49.929567] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.344 [2024-10-01 14:32:49.929988] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.344 [2024-10-01 14:32:49.930011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:58.344 [2024-10-01 14:32:49.930086] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:58.344 [2024-10-01 14:32:49.930114] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:58.344 pt2 00:07:58.344 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.344 14:32:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:58.344 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:58.344 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:07:58.344 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.344 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.344 [2024-10-01 14:32:49.937490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:07:58.344 [2024-10-01 14:32:49.937529] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.344 [2024-10-01 14:32:49.937545] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:07:58.344 [2024-10-01 14:32:49.937555] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.344 [2024-10-01 14:32:49.937912] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.344 [2024-10-01 14:32:49.937933] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:07:58.344 [2024-10-01 14:32:49.937987] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:07:58.344 [2024-10-01 14:32:49.938005] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:07:58.344 [2024-10-01 14:32:49.938117] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:58.344 [2024-10-01 14:32:49.938133] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:58.344 [2024-10-01 14:32:49.938358] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:58.344 [2024-10-01 14:32:49.938496] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:07:58.345 [2024-10-01 14:32:49.938504] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:58.345 [2024-10-01 14:32:49.938628] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:58.345 pt3 00:07:58.345 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.345 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:58.345 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:58.345 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:07:58.345 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:58.345 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:58.345 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:58.345 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:58.345 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:58.345 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.345 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.345 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.345 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.345 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.345 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:58.345 14:32:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.345 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.345 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.345 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.345 "name": "raid_bdev1", 00:07:58.345 "uuid": "cb31261f-39d5-43f1-b5b5-6eaaaf255752", 00:07:58.345 "strip_size_kb": 0, 00:07:58.345 "state": "online", 00:07:58.345 "raid_level": "raid1", 00:07:58.345 "superblock": true, 00:07:58.345 "num_base_bdevs": 3, 00:07:58.345 "num_base_bdevs_discovered": 3, 00:07:58.345 "num_base_bdevs_operational": 3, 00:07:58.345 "base_bdevs_list": [ 00:07:58.345 { 00:07:58.345 "name": "pt1", 00:07:58.345 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:58.345 "is_configured": true, 00:07:58.345 "data_offset": 2048, 00:07:58.345 "data_size": 63488 00:07:58.345 }, 00:07:58.345 { 00:07:58.345 "name": "pt2", 00:07:58.345 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:58.345 "is_configured": true, 00:07:58.345 "data_offset": 2048, 00:07:58.345 "data_size": 63488 00:07:58.345 }, 00:07:58.345 { 00:07:58.345 "name": "pt3", 00:07:58.345 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:58.345 "is_configured": true, 00:07:58.345 "data_offset": 2048, 00:07:58.345 "data_size": 63488 00:07:58.345 } 00:07:58.345 ] 00:07:58.345 }' 00:07:58.345 14:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.345 14:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.605 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:58.605 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:58.605 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:58.605 14:32:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:58.605 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:58.605 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:58.605 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:58.605 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:58.605 14:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.605 14:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.605 [2024-10-01 14:32:50.273925] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:58.605 14:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.864 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:58.865 "name": "raid_bdev1", 00:07:58.865 "aliases": [ 00:07:58.865 "cb31261f-39d5-43f1-b5b5-6eaaaf255752" 00:07:58.865 ], 00:07:58.865 "product_name": "Raid Volume", 00:07:58.865 "block_size": 512, 00:07:58.865 "num_blocks": 63488, 00:07:58.865 "uuid": "cb31261f-39d5-43f1-b5b5-6eaaaf255752", 00:07:58.865 "assigned_rate_limits": { 00:07:58.865 "rw_ios_per_sec": 0, 00:07:58.865 "rw_mbytes_per_sec": 0, 00:07:58.865 "r_mbytes_per_sec": 0, 00:07:58.865 "w_mbytes_per_sec": 0 00:07:58.865 }, 00:07:58.865 "claimed": false, 00:07:58.865 "zoned": false, 00:07:58.865 "supported_io_types": { 00:07:58.865 "read": true, 00:07:58.865 "write": true, 00:07:58.865 "unmap": false, 00:07:58.865 "flush": false, 00:07:58.865 "reset": true, 00:07:58.865 "nvme_admin": false, 00:07:58.865 "nvme_io": false, 00:07:58.865 "nvme_io_md": false, 00:07:58.865 "write_zeroes": true, 00:07:58.865 "zcopy": false, 00:07:58.865 "get_zone_info": false, 00:07:58.865 
"zone_management": false, 00:07:58.865 "zone_append": false, 00:07:58.865 "compare": false, 00:07:58.865 "compare_and_write": false, 00:07:58.865 "abort": false, 00:07:58.865 "seek_hole": false, 00:07:58.865 "seek_data": false, 00:07:58.865 "copy": false, 00:07:58.865 "nvme_iov_md": false 00:07:58.865 }, 00:07:58.865 "memory_domains": [ 00:07:58.865 { 00:07:58.865 "dma_device_id": "system", 00:07:58.865 "dma_device_type": 1 00:07:58.865 }, 00:07:58.865 { 00:07:58.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.865 "dma_device_type": 2 00:07:58.865 }, 00:07:58.865 { 00:07:58.865 "dma_device_id": "system", 00:07:58.865 "dma_device_type": 1 00:07:58.865 }, 00:07:58.865 { 00:07:58.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.865 "dma_device_type": 2 00:07:58.865 }, 00:07:58.865 { 00:07:58.865 "dma_device_id": "system", 00:07:58.865 "dma_device_type": 1 00:07:58.865 }, 00:07:58.865 { 00:07:58.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.865 "dma_device_type": 2 00:07:58.865 } 00:07:58.865 ], 00:07:58.865 "driver_specific": { 00:07:58.865 "raid": { 00:07:58.865 "uuid": "cb31261f-39d5-43f1-b5b5-6eaaaf255752", 00:07:58.865 "strip_size_kb": 0, 00:07:58.865 "state": "online", 00:07:58.865 "raid_level": "raid1", 00:07:58.865 "superblock": true, 00:07:58.865 "num_base_bdevs": 3, 00:07:58.865 "num_base_bdevs_discovered": 3, 00:07:58.865 "num_base_bdevs_operational": 3, 00:07:58.865 "base_bdevs_list": [ 00:07:58.865 { 00:07:58.865 "name": "pt1", 00:07:58.865 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:58.865 "is_configured": true, 00:07:58.865 "data_offset": 2048, 00:07:58.865 "data_size": 63488 00:07:58.865 }, 00:07:58.865 { 00:07:58.865 "name": "pt2", 00:07:58.865 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:58.865 "is_configured": true, 00:07:58.865 "data_offset": 2048, 00:07:58.865 "data_size": 63488 00:07:58.865 }, 00:07:58.865 { 00:07:58.865 "name": "pt3", 00:07:58.865 "uuid": "00000000-0000-0000-0000-000000000003", 
00:07:58.865 "is_configured": true, 00:07:58.865 "data_offset": 2048, 00:07:58.865 "data_size": 63488 00:07:58.865 } 00:07:58.865 ] 00:07:58.865 } 00:07:58.865 } 00:07:58.865 }' 00:07:58.865 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:58.865 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:58.865 pt2 00:07:58.865 pt3' 00:07:58.865 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.865 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:58.865 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:58.865 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:58.865 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.865 14:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.865 14:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.865 14:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.865 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:58.865 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:58.865 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:58.865 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:58.865 14:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.865 
14:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.865 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.865 14:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.865 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:58.865 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:58.865 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:58.865 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.865 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:07:58.865 14:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.865 14:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.865 14:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.865 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:58.865 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:58.865 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:58.865 14:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.865 14:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.865 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:58.865 [2024-10-01 14:32:50.465894] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:07:58.865 14:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.865 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' cb31261f-39d5-43f1-b5b5-6eaaaf255752 '!=' cb31261f-39d5-43f1-b5b5-6eaaaf255752 ']' 00:07:58.865 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:07:58.865 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:58.865 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:58.865 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:07:58.865 14:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.865 14:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.865 [2024-10-01 14:32:50.497645] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:07:58.865 14:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.866 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:58.866 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:58.866 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:58.866 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:58.866 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:58.866 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:58.866 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.866 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:07:58.866 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.866 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.866 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.866 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:58.866 14:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.866 14:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.866 14:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.866 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.866 "name": "raid_bdev1", 00:07:58.866 "uuid": "cb31261f-39d5-43f1-b5b5-6eaaaf255752", 00:07:58.866 "strip_size_kb": 0, 00:07:58.866 "state": "online", 00:07:58.866 "raid_level": "raid1", 00:07:58.866 "superblock": true, 00:07:58.866 "num_base_bdevs": 3, 00:07:58.866 "num_base_bdevs_discovered": 2, 00:07:58.866 "num_base_bdevs_operational": 2, 00:07:58.866 "base_bdevs_list": [ 00:07:58.866 { 00:07:58.866 "name": null, 00:07:58.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.866 "is_configured": false, 00:07:58.866 "data_offset": 0, 00:07:58.866 "data_size": 63488 00:07:58.866 }, 00:07:58.866 { 00:07:58.866 "name": "pt2", 00:07:58.866 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:58.866 "is_configured": true, 00:07:58.866 "data_offset": 2048, 00:07:58.866 "data_size": 63488 00:07:58.866 }, 00:07:58.866 { 00:07:58.866 "name": "pt3", 00:07:58.866 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:58.866 "is_configured": true, 00:07:58.866 "data_offset": 2048, 00:07:58.866 "data_size": 63488 00:07:58.866 } 00:07:58.866 ] 00:07:58.866 }' 00:07:58.866 14:32:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.866 14:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.126 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:59.126 14:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.126 14:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.386 [2024-10-01 14:32:50.809700] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:59.386 [2024-10-01 14:32:50.809735] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:59.386 [2024-10-01 14:32:50.809798] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:59.387 [2024-10-01 14:32:50.809854] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:59.387 [2024-10-01 14:32:50.809867] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:59.387 14:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.387 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.387 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:07:59.387 14:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.387 14:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.387 14:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.387 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:07:59.387 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:07:59.387 
14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:07:59.387 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:59.387 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:07:59.387 14:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.387 14:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.387 14:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.387 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:07:59.387 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:59.387 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:07:59.387 14:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.387 14:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.387 14:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.387 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:07:59.387 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:59.387 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:07:59.387 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:07:59.387 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:59.387 14:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.387 14:32:50 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:59.387 [2024-10-01 14:32:50.865689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:59.387 [2024-10-01 14:32:50.865741] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:59.387 [2024-10-01 14:32:50.865755] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:07:59.387 [2024-10-01 14:32:50.865765] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:59.387 [2024-10-01 14:32:50.867883] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:59.387 [2024-10-01 14:32:50.867917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:59.387 [2024-10-01 14:32:50.867983] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:59.387 [2024-10-01 14:32:50.868025] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:59.387 pt2 00:07:59.387 14:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.387 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:07:59.387 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:59.387 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:59.387 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:59.387 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:59.387 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:59.387 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.387 14:32:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.387 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.387 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.387 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.387 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:59.387 14:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.387 14:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.387 14:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.387 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.387 "name": "raid_bdev1", 00:07:59.387 "uuid": "cb31261f-39d5-43f1-b5b5-6eaaaf255752", 00:07:59.387 "strip_size_kb": 0, 00:07:59.387 "state": "configuring", 00:07:59.387 "raid_level": "raid1", 00:07:59.387 "superblock": true, 00:07:59.387 "num_base_bdevs": 3, 00:07:59.387 "num_base_bdevs_discovered": 1, 00:07:59.387 "num_base_bdevs_operational": 2, 00:07:59.387 "base_bdevs_list": [ 00:07:59.387 { 00:07:59.387 "name": null, 00:07:59.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.387 "is_configured": false, 00:07:59.387 "data_offset": 2048, 00:07:59.387 "data_size": 63488 00:07:59.387 }, 00:07:59.387 { 00:07:59.387 "name": "pt2", 00:07:59.387 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:59.387 "is_configured": true, 00:07:59.387 "data_offset": 2048, 00:07:59.387 "data_size": 63488 00:07:59.387 }, 00:07:59.387 { 00:07:59.387 "name": null, 00:07:59.387 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:59.387 "is_configured": false, 00:07:59.387 "data_offset": 2048, 00:07:59.387 "data_size": 63488 00:07:59.387 } 00:07:59.387 ] 00:07:59.387 }' 
00:07:59.387 14:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.387 14:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.647 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:07:59.647 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:07:59.647 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:07:59.647 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:07:59.647 14:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.647 14:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.647 [2024-10-01 14:32:51.189805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:07:59.647 [2024-10-01 14:32:51.189856] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:59.647 [2024-10-01 14:32:51.189874] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:07:59.647 [2024-10-01 14:32:51.189885] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:59.647 [2024-10-01 14:32:51.190284] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:59.647 [2024-10-01 14:32:51.190300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:07:59.647 [2024-10-01 14:32:51.190366] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:07:59.647 [2024-10-01 14:32:51.190389] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:07:59.647 [2024-10-01 14:32:51.190486] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:07:59.647 [2024-10-01 14:32:51.190497] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:59.648 [2024-10-01 14:32:51.190738] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:07:59.648 [2024-10-01 14:32:51.190872] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:07:59.648 [2024-10-01 14:32:51.190881] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:07:59.648 [2024-10-01 14:32:51.191004] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:59.648 pt3 00:07:59.648 14:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.648 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:59.648 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:59.648 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:59.648 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:59.648 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:59.648 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:59.648 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.648 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.648 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.648 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.648 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:59.648 14:32:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.648 14:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.648 14:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.648 14:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.648 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.648 "name": "raid_bdev1", 00:07:59.648 "uuid": "cb31261f-39d5-43f1-b5b5-6eaaaf255752", 00:07:59.648 "strip_size_kb": 0, 00:07:59.648 "state": "online", 00:07:59.648 "raid_level": "raid1", 00:07:59.648 "superblock": true, 00:07:59.648 "num_base_bdevs": 3, 00:07:59.648 "num_base_bdevs_discovered": 2, 00:07:59.648 "num_base_bdevs_operational": 2, 00:07:59.648 "base_bdevs_list": [ 00:07:59.648 { 00:07:59.648 "name": null, 00:07:59.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.648 "is_configured": false, 00:07:59.648 "data_offset": 2048, 00:07:59.648 "data_size": 63488 00:07:59.648 }, 00:07:59.648 { 00:07:59.648 "name": "pt2", 00:07:59.648 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:59.648 "is_configured": true, 00:07:59.648 "data_offset": 2048, 00:07:59.648 "data_size": 63488 00:07:59.648 }, 00:07:59.648 { 00:07:59.648 "name": "pt3", 00:07:59.648 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:59.648 "is_configured": true, 00:07:59.648 "data_offset": 2048, 00:07:59.648 "data_size": 63488 00:07:59.648 } 00:07:59.648 ] 00:07:59.648 }' 00:07:59.648 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.648 14:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.909 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:59.909 14:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:59.909 14:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.909 [2024-10-01 14:32:51.517873] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:59.909 [2024-10-01 14:32:51.517903] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:59.909 [2024-10-01 14:32:51.517964] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:59.909 [2024-10-01 14:32:51.518021] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:59.909 [2024-10-01 14:32:51.518030] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:07:59.909 14:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.909 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.909 14:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.909 14:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.909 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:07:59.909 14:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.909 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:07:59.909 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:07:59.909 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:07:59.909 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:07:59.909 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:07:59.909 14:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:59.909 14:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.909 14:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.909 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:59.909 14:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.909 14:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.909 [2024-10-01 14:32:51.569903] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:59.909 [2024-10-01 14:32:51.569949] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:59.909 [2024-10-01 14:32:51.569968] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:07:59.910 [2024-10-01 14:32:51.569976] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:59.910 [2024-10-01 14:32:51.572133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:59.910 [2024-10-01 14:32:51.572162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:59.910 [2024-10-01 14:32:51.572234] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:59.910 [2024-10-01 14:32:51.572272] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:59.910 [2024-10-01 14:32:51.572383] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:07:59.910 [2024-10-01 14:32:51.572403] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:59.910 [2024-10-01 14:32:51.572421] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state 
configuring 00:07:59.910 [2024-10-01 14:32:51.572466] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:59.910 pt1 00:07:59.910 14:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.910 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:07:59.910 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:07:59.910 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:59.910 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:59.910 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:59.910 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:59.910 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:59.910 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.910 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.910 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.910 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.910 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.910 14:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.910 14:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.910 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:59.910 14:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:08:00.171 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.171 "name": "raid_bdev1", 00:08:00.171 "uuid": "cb31261f-39d5-43f1-b5b5-6eaaaf255752", 00:08:00.171 "strip_size_kb": 0, 00:08:00.171 "state": "configuring", 00:08:00.171 "raid_level": "raid1", 00:08:00.171 "superblock": true, 00:08:00.171 "num_base_bdevs": 3, 00:08:00.171 "num_base_bdevs_discovered": 1, 00:08:00.171 "num_base_bdevs_operational": 2, 00:08:00.171 "base_bdevs_list": [ 00:08:00.171 { 00:08:00.171 "name": null, 00:08:00.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.171 "is_configured": false, 00:08:00.171 "data_offset": 2048, 00:08:00.171 "data_size": 63488 00:08:00.171 }, 00:08:00.171 { 00:08:00.171 "name": "pt2", 00:08:00.171 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:00.171 "is_configured": true, 00:08:00.171 "data_offset": 2048, 00:08:00.171 "data_size": 63488 00:08:00.171 }, 00:08:00.171 { 00:08:00.171 "name": null, 00:08:00.171 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:00.171 "is_configured": false, 00:08:00.171 "data_offset": 2048, 00:08:00.171 "data_size": 63488 00:08:00.171 } 00:08:00.171 ] 00:08:00.171 }' 00:08:00.171 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.171 14:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.433 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:00.433 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:08:00.433 14:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.433 14:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.433 14:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.433 14:32:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:08:00.433 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:00.433 14:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.433 14:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.433 [2024-10-01 14:32:51.942005] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:00.433 [2024-10-01 14:32:51.942054] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:00.433 [2024-10-01 14:32:51.942073] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:08:00.433 [2024-10-01 14:32:51.942082] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:00.433 [2024-10-01 14:32:51.942467] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:00.433 [2024-10-01 14:32:51.942481] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:00.433 [2024-10-01 14:32:51.942550] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:00.433 [2024-10-01 14:32:51.942585] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:00.434 [2024-10-01 14:32:51.942694] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:08:00.434 [2024-10-01 14:32:51.942716] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:00.434 [2024-10-01 14:32:51.942968] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:00.434 [2024-10-01 14:32:51.943099] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:08:00.434 [2024-10-01 14:32:51.943111] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:08:00.434 [2024-10-01 14:32:51.943232] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:00.434 pt3 00:08:00.434 14:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.434 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:00.434 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:00.434 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:00.434 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:00.434 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:00.434 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:00.434 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.434 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.434 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.434 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.434 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.434 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:00.434 14:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.434 14:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.434 14:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:08:00.434 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.434 "name": "raid_bdev1", 00:08:00.434 "uuid": "cb31261f-39d5-43f1-b5b5-6eaaaf255752", 00:08:00.434 "strip_size_kb": 0, 00:08:00.434 "state": "online", 00:08:00.434 "raid_level": "raid1", 00:08:00.434 "superblock": true, 00:08:00.434 "num_base_bdevs": 3, 00:08:00.434 "num_base_bdevs_discovered": 2, 00:08:00.434 "num_base_bdevs_operational": 2, 00:08:00.434 "base_bdevs_list": [ 00:08:00.434 { 00:08:00.434 "name": null, 00:08:00.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.434 "is_configured": false, 00:08:00.434 "data_offset": 2048, 00:08:00.434 "data_size": 63488 00:08:00.434 }, 00:08:00.434 { 00:08:00.434 "name": "pt2", 00:08:00.434 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:00.434 "is_configured": true, 00:08:00.434 "data_offset": 2048, 00:08:00.434 "data_size": 63488 00:08:00.434 }, 00:08:00.434 { 00:08:00.434 "name": "pt3", 00:08:00.434 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:00.434 "is_configured": true, 00:08:00.434 "data_offset": 2048, 00:08:00.434 "data_size": 63488 00:08:00.434 } 00:08:00.434 ] 00:08:00.434 }' 00:08:00.434 14:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.434 14:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.695 14:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:00.695 14:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:00.695 14:32:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.695 14:32:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.695 14:32:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.695 14:32:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:00.695 14:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:00.695 14:32:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.695 14:32:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.695 14:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:00.695 [2024-10-01 14:32:52.290353] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:00.695 14:32:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.695 14:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' cb31261f-39d5-43f1-b5b5-6eaaaf255752 '!=' cb31261f-39d5-43f1-b5b5-6eaaaf255752 ']' 00:08:00.695 14:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 67168 00:08:00.695 14:32:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 67168 ']' 00:08:00.695 14:32:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 67168 00:08:00.695 14:32:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:00.695 14:32:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:00.695 14:32:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67168 00:08:00.695 14:32:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:00.695 14:32:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:00.695 killing process with pid 67168 00:08:00.695 14:32:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67168' 00:08:00.695 14:32:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@969 -- # kill 67168 00:08:00.695 [2024-10-01 14:32:52.329932] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:00.695 14:32:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 67168 00:08:00.695 [2024-10-01 14:32:52.330022] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:00.695 [2024-10-01 14:32:52.330078] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:00.695 [2024-10-01 14:32:52.330088] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:08:00.956 [2024-10-01 14:32:52.516792] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:01.899 14:32:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:01.899 00:08:01.899 real 0m5.801s 00:08:01.899 user 0m8.988s 00:08:01.899 sys 0m0.925s 00:08:01.899 14:32:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:01.899 ************************************ 00:08:01.899 END TEST raid_superblock_test 00:08:01.899 ************************************ 00:08:01.899 14:32:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.899 14:32:53 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:08:01.899 14:32:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:01.899 14:32:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:01.899 14:32:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:01.899 ************************************ 00:08:01.899 START TEST raid_read_error_test 00:08:01.899 ************************************ 00:08:01.899 14:32:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read 00:08:01.899 14:32:53 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:01.899 14:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:01.899 14:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:01.899 14:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:01.899 14:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:01.899 14:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:01.899 14:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:01.899 14:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:01.899 14:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:01.899 14:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:01.899 14:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:01.899 14:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:01.899 14:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:01.899 14:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:01.899 14:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:01.899 14:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:01.899 14:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:01.899 14:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:01.899 14:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:01.899 14:32:53 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:01.899 14:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:01.899 14:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:01.899 14:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:01.899 14:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:01.899 14:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.RttJWVGyTf 00:08:01.899 14:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67592 00:08:01.899 14:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67592 00:08:01.899 14:32:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 67592 ']' 00:08:01.899 14:32:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.899 14:32:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:01.899 14:32:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.899 14:32:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:01.899 14:32:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.899 14:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:01.900 [2024-10-01 14:32:53.454425] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:08:01.900 [2024-10-01 14:32:53.454538] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67592 ] 00:08:02.161 [2024-10-01 14:32:53.605765] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.161 [2024-10-01 14:32:53.794235] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.433 [2024-10-01 14:32:53.932646] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:02.433 [2024-10-01 14:32:53.932715] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:02.694 14:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:02.694 14:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:02.694 14:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:02.694 14:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:02.694 14:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.694 14:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.694 BaseBdev1_malloc 00:08:02.694 14:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.694 14:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:02.694 14:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.694 14:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.694 true 00:08:02.694 14:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:02.694 14:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:02.694 14:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.694 14:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.694 [2024-10-01 14:32:54.350207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:02.694 [2024-10-01 14:32:54.350254] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:02.694 [2024-10-01 14:32:54.350272] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:02.694 [2024-10-01 14:32:54.350283] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:02.694 [2024-10-01 14:32:54.352391] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:02.694 [2024-10-01 14:32:54.352426] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:02.694 BaseBdev1 00:08:02.694 14:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.694 14:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:02.694 14:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:02.694 14:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.694 14:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.955 BaseBdev2_malloc 00:08:02.955 14:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.955 14:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:02.955 14:32:54 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.955 14:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.955 true 00:08:02.955 14:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.955 14:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:02.955 14:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.955 14:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.955 [2024-10-01 14:32:54.411612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:02.955 [2024-10-01 14:32:54.411660] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:02.955 [2024-10-01 14:32:54.411676] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:02.955 [2024-10-01 14:32:54.411687] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:02.955 [2024-10-01 14:32:54.413797] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:02.955 [2024-10-01 14:32:54.413827] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:02.955 BaseBdev2 00:08:02.955 14:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.955 14:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:02.955 14:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:02.955 14:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.955 14:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.955 BaseBdev3_malloc 00:08:02.955 14:32:54 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.955 14:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:02.955 14:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.955 14:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.955 true 00:08:02.955 14:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.955 14:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:02.955 14:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.955 14:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.955 [2024-10-01 14:32:54.455368] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:02.955 [2024-10-01 14:32:54.455408] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:02.955 [2024-10-01 14:32:54.455423] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:02.955 [2024-10-01 14:32:54.455433] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:02.955 [2024-10-01 14:32:54.457529] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:02.955 [2024-10-01 14:32:54.457558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:02.955 BaseBdev3 00:08:02.955 14:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.955 14:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:02.955 14:32:54 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.955 14:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.955 [2024-10-01 14:32:54.463437] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:02.955 [2024-10-01 14:32:54.465254] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:02.955 [2024-10-01 14:32:54.465368] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:02.955 [2024-10-01 14:32:54.465573] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:02.955 [2024-10-01 14:32:54.465591] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:02.955 [2024-10-01 14:32:54.465850] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:02.955 [2024-10-01 14:32:54.466007] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:02.955 [2024-10-01 14:32:54.466024] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:02.955 [2024-10-01 14:32:54.466161] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:02.955 14:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.955 14:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:08:02.955 14:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:02.955 14:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:02.955 14:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:02.955 14:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:02.955 14:32:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:02.955 14:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.955 14:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.955 14:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.955 14:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.956 14:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:02.956 14:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.956 14:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.956 14:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.956 14:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.956 14:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.956 "name": "raid_bdev1", 00:08:02.956 "uuid": "5f20cc46-74d3-46c1-94a1-5f3073b1075d", 00:08:02.956 "strip_size_kb": 0, 00:08:02.956 "state": "online", 00:08:02.956 "raid_level": "raid1", 00:08:02.956 "superblock": true, 00:08:02.956 "num_base_bdevs": 3, 00:08:02.956 "num_base_bdevs_discovered": 3, 00:08:02.956 "num_base_bdevs_operational": 3, 00:08:02.956 "base_bdevs_list": [ 00:08:02.956 { 00:08:02.956 "name": "BaseBdev1", 00:08:02.956 "uuid": "603c678c-1b1f-5c89-84cb-7c8f248da160", 00:08:02.956 "is_configured": true, 00:08:02.956 "data_offset": 2048, 00:08:02.956 "data_size": 63488 00:08:02.956 }, 00:08:02.956 { 00:08:02.956 "name": "BaseBdev2", 00:08:02.956 "uuid": "d69d3d21-87a3-5064-a915-be6d2b462d91", 00:08:02.956 "is_configured": true, 00:08:02.956 "data_offset": 2048, 00:08:02.956 "data_size": 63488 
00:08:02.956 }, 00:08:02.956 { 00:08:02.956 "name": "BaseBdev3", 00:08:02.956 "uuid": "1dcbc1d9-58e3-5ac7-856c-9a5cf136b516", 00:08:02.956 "is_configured": true, 00:08:02.956 "data_offset": 2048, 00:08:02.956 "data_size": 63488 00:08:02.956 } 00:08:02.956 ] 00:08:02.956 }' 00:08:02.956 14:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.956 14:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.216 14:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:03.216 14:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:03.477 [2024-10-01 14:32:54.908512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:04.418 14:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:04.418 14:32:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.418 14:32:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.418 14:32:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.418 14:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:04.418 14:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:04.418 14:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:04.418 14:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:04.418 14:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:08:04.418 14:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:04.418 
14:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:04.418 14:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:04.418 14:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:04.418 14:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:04.418 14:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.418 14:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.418 14:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.418 14:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.418 14:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.418 14:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:04.418 14:32:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.418 14:32:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.418 14:32:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.418 14:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.418 "name": "raid_bdev1", 00:08:04.418 "uuid": "5f20cc46-74d3-46c1-94a1-5f3073b1075d", 00:08:04.418 "strip_size_kb": 0, 00:08:04.418 "state": "online", 00:08:04.418 "raid_level": "raid1", 00:08:04.418 "superblock": true, 00:08:04.418 "num_base_bdevs": 3, 00:08:04.418 "num_base_bdevs_discovered": 3, 00:08:04.418 "num_base_bdevs_operational": 3, 00:08:04.418 "base_bdevs_list": [ 00:08:04.418 { 00:08:04.418 "name": "BaseBdev1", 00:08:04.418 "uuid": "603c678c-1b1f-5c89-84cb-7c8f248da160", 
00:08:04.418 "is_configured": true, 00:08:04.418 "data_offset": 2048, 00:08:04.418 "data_size": 63488 00:08:04.418 }, 00:08:04.418 { 00:08:04.418 "name": "BaseBdev2", 00:08:04.418 "uuid": "d69d3d21-87a3-5064-a915-be6d2b462d91", 00:08:04.418 "is_configured": true, 00:08:04.418 "data_offset": 2048, 00:08:04.418 "data_size": 63488 00:08:04.418 }, 00:08:04.418 { 00:08:04.418 "name": "BaseBdev3", 00:08:04.418 "uuid": "1dcbc1d9-58e3-5ac7-856c-9a5cf136b516", 00:08:04.418 "is_configured": true, 00:08:04.418 "data_offset": 2048, 00:08:04.418 "data_size": 63488 00:08:04.418 } 00:08:04.418 ] 00:08:04.418 }' 00:08:04.418 14:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.418 14:32:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.678 14:32:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:04.678 14:32:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.678 14:32:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.678 [2024-10-01 14:32:56.184282] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:04.678 [2024-10-01 14:32:56.184316] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:04.678 [2024-10-01 14:32:56.187385] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:04.678 [2024-10-01 14:32:56.187432] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:04.678 [2024-10-01 14:32:56.187538] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:04.678 [2024-10-01 14:32:56.187553] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:04.678 { 00:08:04.678 "results": [ 00:08:04.678 { 00:08:04.678 "job": "raid_bdev1", 
00:08:04.678 "core_mask": "0x1", 00:08:04.678 "workload": "randrw", 00:08:04.678 "percentage": 50, 00:08:04.678 "status": "finished", 00:08:04.678 "queue_depth": 1, 00:08:04.678 "io_size": 131072, 00:08:04.678 "runtime": 1.273936, 00:08:04.678 "iops": 14025.822333304028, 00:08:04.678 "mibps": 1753.2277916630035, 00:08:04.678 "io_failed": 0, 00:08:04.678 "io_timeout": 0, 00:08:04.678 "avg_latency_us": 68.15019682802088, 00:08:04.678 "min_latency_us": 29.53846153846154, 00:08:04.678 "max_latency_us": 1688.8123076923077 00:08:04.678 } 00:08:04.678 ], 00:08:04.678 "core_count": 1 00:08:04.678 } 00:08:04.678 14:32:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.678 14:32:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67592 00:08:04.678 14:32:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 67592 ']' 00:08:04.679 14:32:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 67592 00:08:04.679 14:32:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:04.679 14:32:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:04.679 14:32:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67592 00:08:04.679 14:32:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:04.679 14:32:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:04.679 killing process with pid 67592 00:08:04.679 14:32:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67592' 00:08:04.679 14:32:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 67592 00:08:04.679 [2024-10-01 14:32:56.219152] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:04.679 14:32:56 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 67592 00:08:04.938 [2024-10-01 14:32:56.363314] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:05.878 14:32:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.RttJWVGyTf 00:08:05.878 14:32:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:05.878 14:32:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:05.878 14:32:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:05.878 14:32:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:05.878 14:32:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:05.878 14:32:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:05.878 14:32:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:05.878 00:08:05.878 real 0m3.856s 00:08:05.878 user 0m4.574s 00:08:05.878 sys 0m0.416s 00:08:05.878 14:32:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:05.878 14:32:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.878 ************************************ 00:08:05.878 END TEST raid_read_error_test 00:08:05.878 ************************************ 00:08:05.878 14:32:57 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:08:05.878 14:32:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:05.878 14:32:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:05.878 14:32:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:05.878 ************************************ 00:08:05.878 START TEST raid_write_error_test 00:08:05.878 ************************************ 00:08:05.878 14:32:57 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write 00:08:05.878 14:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:05.878 14:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:05.878 14:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:05.878 14:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:05.878 14:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:05.878 14:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:05.878 14:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:05.878 14:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:05.878 14:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:05.878 14:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:05.878 14:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:05.878 14:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:05.878 14:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:05.878 14:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:05.878 14:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:05.878 14:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:05.878 14:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:05.878 14:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:08:05.878 14:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:05.878 14:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:05.878 14:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:05.878 14:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:05.878 14:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:05.878 14:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:05.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.878 14:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Wm1HdzytZu 00:08:05.878 14:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67732 00:08:05.878 14:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67732 00:08:05.878 14:32:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 67732 ']' 00:08:05.878 14:32:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.878 14:32:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:05.878 14:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:05.878 14:32:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:05.878 14:32:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:05.878 14:32:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.878 [2024-10-01 14:32:57.367349] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:08:05.878 [2024-10-01 14:32:57.367950] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67732 ] 00:08:05.878 [2024-10-01 14:32:57.517206] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.138 [2024-10-01 14:32:57.707113] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.398 [2024-10-01 14:32:57.843569] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:06.398 [2024-10-01 14:32:57.843764] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:06.659 14:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:06.659 14:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:06.659 14:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:06.659 14:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:06.659 14:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.659 14:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.659 BaseBdev1_malloc 00:08:06.659 14:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.659 14:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:06.659 14:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.659 14:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.659 true 00:08:06.659 14:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.659 14:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:06.659 14:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.659 14:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.659 [2024-10-01 14:32:58.269024] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:06.659 [2024-10-01 14:32:58.269092] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:06.659 [2024-10-01 14:32:58.269119] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:06.659 [2024-10-01 14:32:58.269135] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:06.659 [2024-10-01 14:32:58.272131] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:06.659 [2024-10-01 14:32:58.272301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:06.659 BaseBdev1 00:08:06.659 14:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.659 14:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:06.659 14:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:06.659 14:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.659 14:32:58 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:06.659 BaseBdev2_malloc 00:08:06.659 14:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.659 14:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:06.659 14:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.659 14:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.659 true 00:08:06.659 14:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.659 14:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:06.659 14:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.659 14:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.659 [2024-10-01 14:32:58.326234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:06.659 [2024-10-01 14:32:58.326289] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:06.659 [2024-10-01 14:32:58.326307] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:06.659 [2024-10-01 14:32:58.326317] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:06.659 [2024-10-01 14:32:58.328445] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:06.659 [2024-10-01 14:32:58.328583] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:06.659 BaseBdev2 00:08:06.659 14:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.659 14:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:06.659 14:32:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:06.659 14:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.659 14:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.920 BaseBdev3_malloc 00:08:06.920 14:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.920 14:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:06.920 14:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.920 14:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.920 true 00:08:06.920 14:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.920 14:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:06.920 14:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.920 14:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.920 [2024-10-01 14:32:58.370274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:06.920 [2024-10-01 14:32:58.370325] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:06.920 [2024-10-01 14:32:58.370344] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:06.920 [2024-10-01 14:32:58.370356] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:06.920 [2024-10-01 14:32:58.372487] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:06.920 [2024-10-01 14:32:58.372521] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:06.920 BaseBdev3 00:08:06.920 14:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.920 14:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:06.920 14:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.920 14:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.920 [2024-10-01 14:32:58.378343] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:06.920 [2024-10-01 14:32:58.380265] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:06.920 [2024-10-01 14:32:58.380338] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:06.920 [2024-10-01 14:32:58.380542] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:06.920 [2024-10-01 14:32:58.380552] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:06.920 [2024-10-01 14:32:58.380819] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:06.920 [2024-10-01 14:32:58.380969] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:06.920 [2024-10-01 14:32:58.380980] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:06.920 [2024-10-01 14:32:58.381117] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:06.920 14:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.920 14:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:08:06.920 14:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:08:06.920 14:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:06.920 14:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:06.920 14:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:06.920 14:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:06.920 14:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.920 14:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.920 14:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.920 14:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.920 14:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.920 14:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.920 14:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:06.920 14:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.920 14:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.920 14:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.920 "name": "raid_bdev1", 00:08:06.920 "uuid": "b6bbcb70-135a-4856-bfea-2c3097d657fa", 00:08:06.920 "strip_size_kb": 0, 00:08:06.920 "state": "online", 00:08:06.920 "raid_level": "raid1", 00:08:06.920 "superblock": true, 00:08:06.920 "num_base_bdevs": 3, 00:08:06.920 "num_base_bdevs_discovered": 3, 00:08:06.920 "num_base_bdevs_operational": 3, 00:08:06.920 "base_bdevs_list": [ 00:08:06.920 { 00:08:06.920 "name": "BaseBdev1", 00:08:06.920 
"uuid": "7031885b-d7b5-5d9b-99df-4d065eb5a0f3", 00:08:06.920 "is_configured": true, 00:08:06.920 "data_offset": 2048, 00:08:06.920 "data_size": 63488 00:08:06.920 }, 00:08:06.920 { 00:08:06.920 "name": "BaseBdev2", 00:08:06.920 "uuid": "9ff1954b-123b-5920-9b18-177667a0920c", 00:08:06.920 "is_configured": true, 00:08:06.920 "data_offset": 2048, 00:08:06.921 "data_size": 63488 00:08:06.921 }, 00:08:06.921 { 00:08:06.921 "name": "BaseBdev3", 00:08:06.921 "uuid": "1cbe90ee-06b3-5bcf-ba07-b7b134183636", 00:08:06.921 "is_configured": true, 00:08:06.921 "data_offset": 2048, 00:08:06.921 "data_size": 63488 00:08:06.921 } 00:08:06.921 ] 00:08:06.921 }' 00:08:06.921 14:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.921 14:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.181 14:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:07.181 14:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:07.181 [2024-10-01 14:32:58.795340] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:08.126 14:32:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:08.126 14:32:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.126 14:32:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.126 [2024-10-01 14:32:59.720086] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:08.126 [2024-10-01 14:32:59.720264] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:08.126 [2024-10-01 14:32:59.720579] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005fb0 
00:08:08.126 14:32:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.126 14:32:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:08.126 14:32:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:08.126 14:32:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:08.126 14:32:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:08:08.126 14:32:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:08.126 14:32:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:08.126 14:32:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:08.126 14:32:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:08.126 14:32:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:08.126 14:32:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:08.127 14:32:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.127 14:32:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.127 14:32:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.127 14:32:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.127 14:32:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.127 14:32:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.127 14:32:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.127 
14:32:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:08.127 14:32:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.127 14:32:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.127 "name": "raid_bdev1", 00:08:08.127 "uuid": "b6bbcb70-135a-4856-bfea-2c3097d657fa", 00:08:08.127 "strip_size_kb": 0, 00:08:08.127 "state": "online", 00:08:08.127 "raid_level": "raid1", 00:08:08.127 "superblock": true, 00:08:08.127 "num_base_bdevs": 3, 00:08:08.127 "num_base_bdevs_discovered": 2, 00:08:08.127 "num_base_bdevs_operational": 2, 00:08:08.127 "base_bdevs_list": [ 00:08:08.127 { 00:08:08.127 "name": null, 00:08:08.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.127 "is_configured": false, 00:08:08.127 "data_offset": 0, 00:08:08.127 "data_size": 63488 00:08:08.127 }, 00:08:08.127 { 00:08:08.127 "name": "BaseBdev2", 00:08:08.127 "uuid": "9ff1954b-123b-5920-9b18-177667a0920c", 00:08:08.127 "is_configured": true, 00:08:08.127 "data_offset": 2048, 00:08:08.127 "data_size": 63488 00:08:08.127 }, 00:08:08.127 { 00:08:08.127 "name": "BaseBdev3", 00:08:08.127 "uuid": "1cbe90ee-06b3-5bcf-ba07-b7b134183636", 00:08:08.127 "is_configured": true, 00:08:08.127 "data_offset": 2048, 00:08:08.127 "data_size": 63488 00:08:08.127 } 00:08:08.127 ] 00:08:08.127 }' 00:08:08.127 14:32:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.127 14:32:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.699 14:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:08.699 14:33:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.699 14:33:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.699 [2024-10-01 14:33:00.086860] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:08.699 [2024-10-01 14:33:00.086897] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:08.699 [2024-10-01 14:33:00.089938] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:08.699 [2024-10-01 14:33:00.089988] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:08.699 [2024-10-01 14:33:00.090075] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:08.699 [2024-10-01 14:33:00.090089] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:08.699 { 00:08:08.699 "results": [ 00:08:08.699 { 00:08:08.699 "job": "raid_bdev1", 00:08:08.699 "core_mask": "0x1", 00:08:08.699 "workload": "randrw", 00:08:08.699 "percentage": 50, 00:08:08.699 "status": "finished", 00:08:08.699 "queue_depth": 1, 00:08:08.699 "io_size": 131072, 00:08:08.699 "runtime": 1.289625, 00:08:08.699 "iops": 15182.708151594456, 00:08:08.699 "mibps": 1897.838518949307, 00:08:08.699 "io_failed": 0, 00:08:08.699 "io_timeout": 0, 00:08:08.699 "avg_latency_us": 62.680107488017605, 00:08:08.699 "min_latency_us": 29.341538461538462, 00:08:08.699 "max_latency_us": 1714.0184615384615 00:08:08.699 } 00:08:08.699 ], 00:08:08.699 "core_count": 1 00:08:08.699 } 00:08:08.699 14:33:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.699 14:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67732 00:08:08.699 14:33:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 67732 ']' 00:08:08.699 14:33:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 67732 00:08:08.699 14:33:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:08.699 14:33:00 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:08.699 14:33:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67732 00:08:08.699 killing process with pid 67732 00:08:08.699 14:33:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:08.699 14:33:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:08.699 14:33:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67732' 00:08:08.699 14:33:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 67732 00:08:08.699 [2024-10-01 14:33:00.119083] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:08.699 14:33:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 67732 00:08:08.699 [2024-10-01 14:33:00.262625] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:09.643 14:33:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:09.643 14:33:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Wm1HdzytZu 00:08:09.643 14:33:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:09.643 14:33:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:09.643 14:33:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:09.643 ************************************ 00:08:09.643 END TEST raid_write_error_test 00:08:09.643 ************************************ 00:08:09.643 14:33:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:09.643 14:33:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:09.643 14:33:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 
]] 00:08:09.643 00:08:09.643 real 0m3.840s 00:08:09.643 user 0m4.571s 00:08:09.643 sys 0m0.399s 00:08:09.643 14:33:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:09.643 14:33:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.643 14:33:01 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:09.643 14:33:01 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:09.643 14:33:01 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:08:09.643 14:33:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:09.643 14:33:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:09.643 14:33:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:09.643 ************************************ 00:08:09.643 START TEST raid_state_function_test 00:08:09.643 ************************************ 00:08:09.643 14:33:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false 00:08:09.643 14:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:09.643 14:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:08:09.643 14:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:09.643 14:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:09.643 14:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:09.643 14:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:09.643 14:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:09.643 14:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:09.643 
14:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:09.643 14:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:09.643 14:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:09.643 14:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:09.643 14:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:09.643 14:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:09.643 14:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:09.643 14:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:08:09.643 14:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:09.643 14:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:09.643 14:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:08:09.644 14:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:09.644 14:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:09.644 14:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:09.644 14:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:09.644 14:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:09.644 14:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:09.644 14:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:09.644 14:33:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:09.644 14:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:09.644 14:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:09.644 Process raid pid: 67870 00:08:09.644 14:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67870 00:08:09.644 14:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67870' 00:08:09.644 14:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67870 00:08:09.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.644 14:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:09.644 14:33:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 67870 ']' 00:08:09.644 14:33:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.644 14:33:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:09.644 14:33:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.644 14:33:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:09.644 14:33:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.644 [2024-10-01 14:33:01.278956] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:08:09.644 [2024-10-01 14:33:01.279102] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.925 [2024-10-01 14:33:01.431503] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.189 [2024-10-01 14:33:01.623235] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.189 [2024-10-01 14:33:01.760673] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:10.189 [2024-10-01 14:33:01.760727] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:10.451 14:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:10.451 14:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:10.451 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:10.451 14:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.451 14:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.451 [2024-10-01 14:33:02.114394] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:10.451 [2024-10-01 14:33:02.114443] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:10.451 [2024-10-01 14:33:02.114454] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:10.451 [2024-10-01 14:33:02.114463] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:10.451 [2024-10-01 14:33:02.114470] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:08:10.451 [2024-10-01 14:33:02.114478] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:10.451 [2024-10-01 14:33:02.114488] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:10.451 [2024-10-01 14:33:02.114496] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:10.451 14:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.451 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:10.451 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.451 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.451 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:10.451 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.451 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:10.451 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.451 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.451 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.451 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.451 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.451 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.451 14:33:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.451 14:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.712 14:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.712 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.712 "name": "Existed_Raid", 00:08:10.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.712 "strip_size_kb": 64, 00:08:10.712 "state": "configuring", 00:08:10.712 "raid_level": "raid0", 00:08:10.712 "superblock": false, 00:08:10.712 "num_base_bdevs": 4, 00:08:10.712 "num_base_bdevs_discovered": 0, 00:08:10.712 "num_base_bdevs_operational": 4, 00:08:10.712 "base_bdevs_list": [ 00:08:10.712 { 00:08:10.712 "name": "BaseBdev1", 00:08:10.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.712 "is_configured": false, 00:08:10.712 "data_offset": 0, 00:08:10.712 "data_size": 0 00:08:10.712 }, 00:08:10.712 { 00:08:10.712 "name": "BaseBdev2", 00:08:10.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.712 "is_configured": false, 00:08:10.712 "data_offset": 0, 00:08:10.712 "data_size": 0 00:08:10.712 }, 00:08:10.712 { 00:08:10.712 "name": "BaseBdev3", 00:08:10.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.712 "is_configured": false, 00:08:10.712 "data_offset": 0, 00:08:10.712 "data_size": 0 00:08:10.712 }, 00:08:10.712 { 00:08:10.712 "name": "BaseBdev4", 00:08:10.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.712 "is_configured": false, 00:08:10.712 "data_offset": 0, 00:08:10.712 "data_size": 0 00:08:10.712 } 00:08:10.712 ] 00:08:10.713 }' 00:08:10.713 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.713 14:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.973 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:08:10.973 14:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.973 14:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.973 [2024-10-01 14:33:02.430394] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:10.973 [2024-10-01 14:33:02.430430] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:10.973 14:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.973 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:10.973 14:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.973 14:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.973 [2024-10-01 14:33:02.438411] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:10.974 [2024-10-01 14:33:02.438448] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:10.974 [2024-10-01 14:33:02.438456] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:10.974 [2024-10-01 14:33:02.438465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:10.974 [2024-10-01 14:33:02.438471] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:10.974 [2024-10-01 14:33:02.438479] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:10.974 [2024-10-01 14:33:02.438485] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:10.974 [2024-10-01 14:33:02.438494] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:10.974 14:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.974 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:10.974 14:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.974 14:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.974 [2024-10-01 14:33:02.484550] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:10.974 BaseBdev1 00:08:10.974 14:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.974 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:10.974 14:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:10.974 14:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:10.974 14:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:10.974 14:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:10.974 14:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:10.974 14:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:10.974 14:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.974 14:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.974 14:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.974 14:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:10.974 14:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.974 14:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.974 [ 00:08:10.974 { 00:08:10.974 "name": "BaseBdev1", 00:08:10.974 "aliases": [ 00:08:10.974 "d271d082-3981-4ae3-bc71-2a49034c0af9" 00:08:10.974 ], 00:08:10.974 "product_name": "Malloc disk", 00:08:10.974 "block_size": 512, 00:08:10.974 "num_blocks": 65536, 00:08:10.974 "uuid": "d271d082-3981-4ae3-bc71-2a49034c0af9", 00:08:10.974 "assigned_rate_limits": { 00:08:10.974 "rw_ios_per_sec": 0, 00:08:10.974 "rw_mbytes_per_sec": 0, 00:08:10.974 "r_mbytes_per_sec": 0, 00:08:10.974 "w_mbytes_per_sec": 0 00:08:10.974 }, 00:08:10.974 "claimed": true, 00:08:10.974 "claim_type": "exclusive_write", 00:08:10.974 "zoned": false, 00:08:10.974 "supported_io_types": { 00:08:10.974 "read": true, 00:08:10.974 "write": true, 00:08:10.974 "unmap": true, 00:08:10.974 "flush": true, 00:08:10.974 "reset": true, 00:08:10.974 "nvme_admin": false, 00:08:10.974 "nvme_io": false, 00:08:10.974 "nvme_io_md": false, 00:08:10.974 "write_zeroes": true, 00:08:10.974 "zcopy": true, 00:08:10.974 "get_zone_info": false, 00:08:10.974 "zone_management": false, 00:08:10.974 "zone_append": false, 00:08:10.974 "compare": false, 00:08:10.974 "compare_and_write": false, 00:08:10.974 "abort": true, 00:08:10.974 "seek_hole": false, 00:08:10.974 "seek_data": false, 00:08:10.974 "copy": true, 00:08:10.974 "nvme_iov_md": false 00:08:10.974 }, 00:08:10.974 "memory_domains": [ 00:08:10.974 { 00:08:10.974 "dma_device_id": "system", 00:08:10.974 "dma_device_type": 1 00:08:10.974 }, 00:08:10.974 { 00:08:10.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.974 "dma_device_type": 2 00:08:10.974 } 00:08:10.974 ], 00:08:10.974 "driver_specific": {} 00:08:10.974 } 00:08:10.974 ] 00:08:10.974 14:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:08:10.974 14:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:10.974 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:10.974 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.974 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.974 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:10.974 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.974 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:10.974 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.974 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.974 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.974 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.974 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.974 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.974 14:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.974 14:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.974 14:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.974 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.974 "name": "Existed_Raid", 
00:08:10.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.974 "strip_size_kb": 64, 00:08:10.974 "state": "configuring", 00:08:10.974 "raid_level": "raid0", 00:08:10.974 "superblock": false, 00:08:10.974 "num_base_bdevs": 4, 00:08:10.974 "num_base_bdevs_discovered": 1, 00:08:10.974 "num_base_bdevs_operational": 4, 00:08:10.974 "base_bdevs_list": [ 00:08:10.974 { 00:08:10.974 "name": "BaseBdev1", 00:08:10.974 "uuid": "d271d082-3981-4ae3-bc71-2a49034c0af9", 00:08:10.974 "is_configured": true, 00:08:10.974 "data_offset": 0, 00:08:10.974 "data_size": 65536 00:08:10.974 }, 00:08:10.974 { 00:08:10.974 "name": "BaseBdev2", 00:08:10.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.974 "is_configured": false, 00:08:10.974 "data_offset": 0, 00:08:10.974 "data_size": 0 00:08:10.974 }, 00:08:10.974 { 00:08:10.974 "name": "BaseBdev3", 00:08:10.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.974 "is_configured": false, 00:08:10.974 "data_offset": 0, 00:08:10.974 "data_size": 0 00:08:10.974 }, 00:08:10.974 { 00:08:10.974 "name": "BaseBdev4", 00:08:10.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.974 "is_configured": false, 00:08:10.974 "data_offset": 0, 00:08:10.974 "data_size": 0 00:08:10.974 } 00:08:10.974 ] 00:08:10.974 }' 00:08:10.974 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.974 14:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.236 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:11.236 14:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.236 14:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.236 [2024-10-01 14:33:02.828658] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:11.236 [2024-10-01 14:33:02.828725] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:11.236 14:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.236 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:11.236 14:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.236 14:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.236 [2024-10-01 14:33:02.836721] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:11.236 [2024-10-01 14:33:02.838572] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:11.236 [2024-10-01 14:33:02.838615] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:11.236 [2024-10-01 14:33:02.838625] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:11.236 [2024-10-01 14:33:02.838636] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:11.236 [2024-10-01 14:33:02.838643] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:11.236 [2024-10-01 14:33:02.838651] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:11.236 14:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.236 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:11.236 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:11.236 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:08:11.236 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.236 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.236 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.236 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.236 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:11.236 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.236 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.236 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.236 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.236 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.236 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.236 14:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.236 14:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.236 14:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.236 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.236 "name": "Existed_Raid", 00:08:11.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.236 "strip_size_kb": 64, 00:08:11.236 "state": "configuring", 00:08:11.236 "raid_level": "raid0", 00:08:11.236 "superblock": false, 00:08:11.236 "num_base_bdevs": 4, 00:08:11.236 
"num_base_bdevs_discovered": 1, 00:08:11.236 "num_base_bdevs_operational": 4, 00:08:11.236 "base_bdevs_list": [ 00:08:11.236 { 00:08:11.236 "name": "BaseBdev1", 00:08:11.236 "uuid": "d271d082-3981-4ae3-bc71-2a49034c0af9", 00:08:11.236 "is_configured": true, 00:08:11.236 "data_offset": 0, 00:08:11.236 "data_size": 65536 00:08:11.236 }, 00:08:11.236 { 00:08:11.236 "name": "BaseBdev2", 00:08:11.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.236 "is_configured": false, 00:08:11.236 "data_offset": 0, 00:08:11.236 "data_size": 0 00:08:11.236 }, 00:08:11.236 { 00:08:11.236 "name": "BaseBdev3", 00:08:11.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.236 "is_configured": false, 00:08:11.236 "data_offset": 0, 00:08:11.236 "data_size": 0 00:08:11.236 }, 00:08:11.236 { 00:08:11.236 "name": "BaseBdev4", 00:08:11.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.236 "is_configured": false, 00:08:11.236 "data_offset": 0, 00:08:11.236 "data_size": 0 00:08:11.236 } 00:08:11.236 ] 00:08:11.236 }' 00:08:11.236 14:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.236 14:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.498 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:11.498 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.498 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.761 [2024-10-01 14:33:03.187381] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:11.761 BaseBdev2 00:08:11.761 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.761 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:11.761 14:33:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:11.761 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:11.761 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:11.761 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:11.761 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:11.761 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:11.761 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.761 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.761 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.761 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:11.761 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.761 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.761 [ 00:08:11.761 { 00:08:11.761 "name": "BaseBdev2", 00:08:11.761 "aliases": [ 00:08:11.761 "f40ecfad-0e46-4fdb-8f60-84f01537e1e3" 00:08:11.761 ], 00:08:11.761 "product_name": "Malloc disk", 00:08:11.761 "block_size": 512, 00:08:11.761 "num_blocks": 65536, 00:08:11.761 "uuid": "f40ecfad-0e46-4fdb-8f60-84f01537e1e3", 00:08:11.761 "assigned_rate_limits": { 00:08:11.761 "rw_ios_per_sec": 0, 00:08:11.761 "rw_mbytes_per_sec": 0, 00:08:11.761 "r_mbytes_per_sec": 0, 00:08:11.761 "w_mbytes_per_sec": 0 00:08:11.761 }, 00:08:11.761 "claimed": true, 00:08:11.761 "claim_type": "exclusive_write", 00:08:11.761 "zoned": false, 00:08:11.761 "supported_io_types": { 
00:08:11.761 "read": true, 00:08:11.761 "write": true, 00:08:11.761 "unmap": true, 00:08:11.761 "flush": true, 00:08:11.761 "reset": true, 00:08:11.761 "nvme_admin": false, 00:08:11.761 "nvme_io": false, 00:08:11.761 "nvme_io_md": false, 00:08:11.761 "write_zeroes": true, 00:08:11.761 "zcopy": true, 00:08:11.761 "get_zone_info": false, 00:08:11.761 "zone_management": false, 00:08:11.761 "zone_append": false, 00:08:11.761 "compare": false, 00:08:11.761 "compare_and_write": false, 00:08:11.761 "abort": true, 00:08:11.761 "seek_hole": false, 00:08:11.761 "seek_data": false, 00:08:11.761 "copy": true, 00:08:11.761 "nvme_iov_md": false 00:08:11.761 }, 00:08:11.761 "memory_domains": [ 00:08:11.761 { 00:08:11.761 "dma_device_id": "system", 00:08:11.761 "dma_device_type": 1 00:08:11.761 }, 00:08:11.761 { 00:08:11.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.761 "dma_device_type": 2 00:08:11.761 } 00:08:11.761 ], 00:08:11.761 "driver_specific": {} 00:08:11.761 } 00:08:11.761 ] 00:08:11.761 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.761 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:11.761 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:11.761 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:11.761 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:11.761 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.761 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.761 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.761 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:08:11.761 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:11.761 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.761 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.761 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.761 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.761 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.761 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.761 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.761 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.761 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.761 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.761 "name": "Existed_Raid", 00:08:11.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.761 "strip_size_kb": 64, 00:08:11.761 "state": "configuring", 00:08:11.761 "raid_level": "raid0", 00:08:11.761 "superblock": false, 00:08:11.761 "num_base_bdevs": 4, 00:08:11.761 "num_base_bdevs_discovered": 2, 00:08:11.761 "num_base_bdevs_operational": 4, 00:08:11.761 "base_bdevs_list": [ 00:08:11.761 { 00:08:11.761 "name": "BaseBdev1", 00:08:11.761 "uuid": "d271d082-3981-4ae3-bc71-2a49034c0af9", 00:08:11.761 "is_configured": true, 00:08:11.761 "data_offset": 0, 00:08:11.761 "data_size": 65536 00:08:11.761 }, 00:08:11.761 { 00:08:11.761 "name": "BaseBdev2", 00:08:11.761 "uuid": "f40ecfad-0e46-4fdb-8f60-84f01537e1e3", 00:08:11.761 
"is_configured": true, 00:08:11.761 "data_offset": 0, 00:08:11.761 "data_size": 65536 00:08:11.761 }, 00:08:11.761 { 00:08:11.761 "name": "BaseBdev3", 00:08:11.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.761 "is_configured": false, 00:08:11.761 "data_offset": 0, 00:08:11.761 "data_size": 0 00:08:11.761 }, 00:08:11.761 { 00:08:11.761 "name": "BaseBdev4", 00:08:11.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.761 "is_configured": false, 00:08:11.761 "data_offset": 0, 00:08:11.761 "data_size": 0 00:08:11.761 } 00:08:11.761 ] 00:08:11.761 }' 00:08:11.761 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.761 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.023 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:12.023 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.023 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.023 [2024-10-01 14:33:03.546548] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:12.023 BaseBdev3 00:08:12.023 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.023 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:12.023 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:12.023 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:12.023 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:12.023 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:12.023 14:33:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:12.023 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:12.023 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.023 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.023 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.023 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:12.023 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.023 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.023 [ 00:08:12.023 { 00:08:12.023 "name": "BaseBdev3", 00:08:12.023 "aliases": [ 00:08:12.023 "d27515ca-97cf-4fc8-868b-cd0818f35c18" 00:08:12.024 ], 00:08:12.024 "product_name": "Malloc disk", 00:08:12.024 "block_size": 512, 00:08:12.024 "num_blocks": 65536, 00:08:12.024 "uuid": "d27515ca-97cf-4fc8-868b-cd0818f35c18", 00:08:12.024 "assigned_rate_limits": { 00:08:12.024 "rw_ios_per_sec": 0, 00:08:12.024 "rw_mbytes_per_sec": 0, 00:08:12.024 "r_mbytes_per_sec": 0, 00:08:12.024 "w_mbytes_per_sec": 0 00:08:12.024 }, 00:08:12.024 "claimed": true, 00:08:12.024 "claim_type": "exclusive_write", 00:08:12.024 "zoned": false, 00:08:12.024 "supported_io_types": { 00:08:12.024 "read": true, 00:08:12.024 "write": true, 00:08:12.024 "unmap": true, 00:08:12.024 "flush": true, 00:08:12.024 "reset": true, 00:08:12.024 "nvme_admin": false, 00:08:12.024 "nvme_io": false, 00:08:12.024 "nvme_io_md": false, 00:08:12.024 "write_zeroes": true, 00:08:12.024 "zcopy": true, 00:08:12.024 "get_zone_info": false, 00:08:12.024 "zone_management": false, 00:08:12.024 "zone_append": false, 00:08:12.024 "compare": false, 00:08:12.024 "compare_and_write": false, 
00:08:12.024 "abort": true, 00:08:12.024 "seek_hole": false, 00:08:12.024 "seek_data": false, 00:08:12.024 "copy": true, 00:08:12.024 "nvme_iov_md": false 00:08:12.024 }, 00:08:12.024 "memory_domains": [ 00:08:12.024 { 00:08:12.024 "dma_device_id": "system", 00:08:12.024 "dma_device_type": 1 00:08:12.024 }, 00:08:12.024 { 00:08:12.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.024 "dma_device_type": 2 00:08:12.024 } 00:08:12.024 ], 00:08:12.024 "driver_specific": {} 00:08:12.024 } 00:08:12.024 ] 00:08:12.024 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.024 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:12.024 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:12.024 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:12.024 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:12.024 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:12.024 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:12.024 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:12.024 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.024 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:12.024 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.024 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.024 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:12.024 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.024 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.024 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.024 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.024 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.024 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.024 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.024 "name": "Existed_Raid", 00:08:12.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.024 "strip_size_kb": 64, 00:08:12.024 "state": "configuring", 00:08:12.024 "raid_level": "raid0", 00:08:12.024 "superblock": false, 00:08:12.024 "num_base_bdevs": 4, 00:08:12.024 "num_base_bdevs_discovered": 3, 00:08:12.024 "num_base_bdevs_operational": 4, 00:08:12.024 "base_bdevs_list": [ 00:08:12.024 { 00:08:12.024 "name": "BaseBdev1", 00:08:12.024 "uuid": "d271d082-3981-4ae3-bc71-2a49034c0af9", 00:08:12.024 "is_configured": true, 00:08:12.024 "data_offset": 0, 00:08:12.024 "data_size": 65536 00:08:12.024 }, 00:08:12.024 { 00:08:12.024 "name": "BaseBdev2", 00:08:12.024 "uuid": "f40ecfad-0e46-4fdb-8f60-84f01537e1e3", 00:08:12.024 "is_configured": true, 00:08:12.024 "data_offset": 0, 00:08:12.024 "data_size": 65536 00:08:12.024 }, 00:08:12.024 { 00:08:12.024 "name": "BaseBdev3", 00:08:12.024 "uuid": "d27515ca-97cf-4fc8-868b-cd0818f35c18", 00:08:12.024 "is_configured": true, 00:08:12.024 "data_offset": 0, 00:08:12.024 "data_size": 65536 00:08:12.024 }, 00:08:12.024 { 00:08:12.024 "name": "BaseBdev4", 00:08:12.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.024 "is_configured": false, 
00:08:12.024 "data_offset": 0, 00:08:12.024 "data_size": 0 00:08:12.024 } 00:08:12.024 ] 00:08:12.024 }' 00:08:12.024 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.024 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.286 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:08:12.286 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.286 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.286 [2024-10-01 14:33:03.905097] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:08:12.286 [2024-10-01 14:33:03.905135] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:12.286 [2024-10-01 14:33:03.905144] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:08:12.286 [2024-10-01 14:33:03.905425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:12.286 [2024-10-01 14:33:03.905568] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:12.286 [2024-10-01 14:33:03.905581] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:12.286 [2024-10-01 14:33:03.905831] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:12.286 BaseBdev4 00:08:12.286 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.286 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:08:12.286 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:08:12.286 14:33:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:12.286 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:12.286 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:12.286 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:12.286 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:12.286 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.286 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.286 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.286 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:08:12.286 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.287 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.287 [ 00:08:12.287 { 00:08:12.287 "name": "BaseBdev4", 00:08:12.287 "aliases": [ 00:08:12.287 "63359fe6-da3c-4780-8d28-e4fab2bb3a0f" 00:08:12.287 ], 00:08:12.287 "product_name": "Malloc disk", 00:08:12.287 "block_size": 512, 00:08:12.287 "num_blocks": 65536, 00:08:12.287 "uuid": "63359fe6-da3c-4780-8d28-e4fab2bb3a0f", 00:08:12.287 "assigned_rate_limits": { 00:08:12.287 "rw_ios_per_sec": 0, 00:08:12.287 "rw_mbytes_per_sec": 0, 00:08:12.287 "r_mbytes_per_sec": 0, 00:08:12.287 "w_mbytes_per_sec": 0 00:08:12.287 }, 00:08:12.287 "claimed": true, 00:08:12.287 "claim_type": "exclusive_write", 00:08:12.287 "zoned": false, 00:08:12.287 "supported_io_types": { 00:08:12.287 "read": true, 00:08:12.287 "write": true, 00:08:12.287 "unmap": true, 00:08:12.287 "flush": true, 00:08:12.287 "reset": true, 00:08:12.287 
"nvme_admin": false, 00:08:12.287 "nvme_io": false, 00:08:12.287 "nvme_io_md": false, 00:08:12.287 "write_zeroes": true, 00:08:12.287 "zcopy": true, 00:08:12.287 "get_zone_info": false, 00:08:12.287 "zone_management": false, 00:08:12.287 "zone_append": false, 00:08:12.287 "compare": false, 00:08:12.287 "compare_and_write": false, 00:08:12.287 "abort": true, 00:08:12.287 "seek_hole": false, 00:08:12.287 "seek_data": false, 00:08:12.287 "copy": true, 00:08:12.287 "nvme_iov_md": false 00:08:12.287 }, 00:08:12.287 "memory_domains": [ 00:08:12.287 { 00:08:12.287 "dma_device_id": "system", 00:08:12.287 "dma_device_type": 1 00:08:12.287 }, 00:08:12.287 { 00:08:12.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.287 "dma_device_type": 2 00:08:12.287 } 00:08:12.287 ], 00:08:12.287 "driver_specific": {} 00:08:12.287 } 00:08:12.287 ] 00:08:12.287 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.287 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:12.287 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:12.287 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:12.287 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:08:12.287 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:12.287 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:12.287 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:12.287 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.287 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:12.287 14:33:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.287 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.287 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.287 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.287 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.287 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.287 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.287 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.287 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.287 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.287 "name": "Existed_Raid", 00:08:12.287 "uuid": "347a8af9-e23b-48bc-b307-796b69ed8dd9", 00:08:12.287 "strip_size_kb": 64, 00:08:12.287 "state": "online", 00:08:12.287 "raid_level": "raid0", 00:08:12.287 "superblock": false, 00:08:12.287 "num_base_bdevs": 4, 00:08:12.287 "num_base_bdevs_discovered": 4, 00:08:12.287 "num_base_bdevs_operational": 4, 00:08:12.287 "base_bdevs_list": [ 00:08:12.287 { 00:08:12.287 "name": "BaseBdev1", 00:08:12.287 "uuid": "d271d082-3981-4ae3-bc71-2a49034c0af9", 00:08:12.287 "is_configured": true, 00:08:12.287 "data_offset": 0, 00:08:12.287 "data_size": 65536 00:08:12.287 }, 00:08:12.287 { 00:08:12.287 "name": "BaseBdev2", 00:08:12.287 "uuid": "f40ecfad-0e46-4fdb-8f60-84f01537e1e3", 00:08:12.287 "is_configured": true, 00:08:12.287 "data_offset": 0, 00:08:12.287 "data_size": 65536 00:08:12.287 }, 00:08:12.287 { 00:08:12.287 "name": "BaseBdev3", 00:08:12.287 "uuid": 
"d27515ca-97cf-4fc8-868b-cd0818f35c18", 00:08:12.287 "is_configured": true, 00:08:12.287 "data_offset": 0, 00:08:12.287 "data_size": 65536 00:08:12.287 }, 00:08:12.287 { 00:08:12.287 "name": "BaseBdev4", 00:08:12.287 "uuid": "63359fe6-da3c-4780-8d28-e4fab2bb3a0f", 00:08:12.287 "is_configured": true, 00:08:12.287 "data_offset": 0, 00:08:12.287 "data_size": 65536 00:08:12.287 } 00:08:12.287 ] 00:08:12.287 }' 00:08:12.287 14:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.287 14:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.859 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:12.859 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:12.859 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:12.859 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:12.859 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:12.859 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:12.859 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:12.859 14:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.859 14:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.859 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:12.859 [2024-10-01 14:33:04.261623] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:12.859 14:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.859 14:33:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:12.859 "name": "Existed_Raid", 00:08:12.859 "aliases": [ 00:08:12.859 "347a8af9-e23b-48bc-b307-796b69ed8dd9" 00:08:12.859 ], 00:08:12.859 "product_name": "Raid Volume", 00:08:12.859 "block_size": 512, 00:08:12.859 "num_blocks": 262144, 00:08:12.859 "uuid": "347a8af9-e23b-48bc-b307-796b69ed8dd9", 00:08:12.859 "assigned_rate_limits": { 00:08:12.859 "rw_ios_per_sec": 0, 00:08:12.859 "rw_mbytes_per_sec": 0, 00:08:12.859 "r_mbytes_per_sec": 0, 00:08:12.859 "w_mbytes_per_sec": 0 00:08:12.859 }, 00:08:12.859 "claimed": false, 00:08:12.859 "zoned": false, 00:08:12.859 "supported_io_types": { 00:08:12.859 "read": true, 00:08:12.859 "write": true, 00:08:12.859 "unmap": true, 00:08:12.859 "flush": true, 00:08:12.859 "reset": true, 00:08:12.859 "nvme_admin": false, 00:08:12.859 "nvme_io": false, 00:08:12.859 "nvme_io_md": false, 00:08:12.859 "write_zeroes": true, 00:08:12.859 "zcopy": false, 00:08:12.859 "get_zone_info": false, 00:08:12.859 "zone_management": false, 00:08:12.859 "zone_append": false, 00:08:12.859 "compare": false, 00:08:12.859 "compare_and_write": false, 00:08:12.859 "abort": false, 00:08:12.859 "seek_hole": false, 00:08:12.859 "seek_data": false, 00:08:12.859 "copy": false, 00:08:12.859 "nvme_iov_md": false 00:08:12.859 }, 00:08:12.859 "memory_domains": [ 00:08:12.859 { 00:08:12.859 "dma_device_id": "system", 00:08:12.859 "dma_device_type": 1 00:08:12.859 }, 00:08:12.859 { 00:08:12.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.859 "dma_device_type": 2 00:08:12.859 }, 00:08:12.859 { 00:08:12.859 "dma_device_id": "system", 00:08:12.859 "dma_device_type": 1 00:08:12.859 }, 00:08:12.859 { 00:08:12.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.859 "dma_device_type": 2 00:08:12.859 }, 00:08:12.859 { 00:08:12.859 "dma_device_id": "system", 00:08:12.859 "dma_device_type": 1 00:08:12.859 }, 00:08:12.859 { 00:08:12.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:12.859 "dma_device_type": 2 00:08:12.859 }, 00:08:12.859 { 00:08:12.859 "dma_device_id": "system", 00:08:12.859 "dma_device_type": 1 00:08:12.859 }, 00:08:12.859 { 00:08:12.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.859 "dma_device_type": 2 00:08:12.859 } 00:08:12.859 ], 00:08:12.859 "driver_specific": { 00:08:12.859 "raid": { 00:08:12.859 "uuid": "347a8af9-e23b-48bc-b307-796b69ed8dd9", 00:08:12.860 "strip_size_kb": 64, 00:08:12.860 "state": "online", 00:08:12.860 "raid_level": "raid0", 00:08:12.860 "superblock": false, 00:08:12.860 "num_base_bdevs": 4, 00:08:12.860 "num_base_bdevs_discovered": 4, 00:08:12.860 "num_base_bdevs_operational": 4, 00:08:12.860 "base_bdevs_list": [ 00:08:12.860 { 00:08:12.860 "name": "BaseBdev1", 00:08:12.860 "uuid": "d271d082-3981-4ae3-bc71-2a49034c0af9", 00:08:12.860 "is_configured": true, 00:08:12.860 "data_offset": 0, 00:08:12.860 "data_size": 65536 00:08:12.860 }, 00:08:12.860 { 00:08:12.860 "name": "BaseBdev2", 00:08:12.860 "uuid": "f40ecfad-0e46-4fdb-8f60-84f01537e1e3", 00:08:12.860 "is_configured": true, 00:08:12.860 "data_offset": 0, 00:08:12.860 "data_size": 65536 00:08:12.860 }, 00:08:12.860 { 00:08:12.860 "name": "BaseBdev3", 00:08:12.860 "uuid": "d27515ca-97cf-4fc8-868b-cd0818f35c18", 00:08:12.860 "is_configured": true, 00:08:12.860 "data_offset": 0, 00:08:12.860 "data_size": 65536 00:08:12.860 }, 00:08:12.860 { 00:08:12.860 "name": "BaseBdev4", 00:08:12.860 "uuid": "63359fe6-da3c-4780-8d28-e4fab2bb3a0f", 00:08:12.860 "is_configured": true, 00:08:12.860 "data_offset": 0, 00:08:12.860 "data_size": 65536 00:08:12.860 } 00:08:12.860 ] 00:08:12.860 } 00:08:12.860 } 00:08:12.860 }' 00:08:12.860 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:12.860 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:12.860 BaseBdev2 00:08:12.860 BaseBdev3 
00:08:12.860 BaseBdev4' 00:08:12.860 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.860 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:12.860 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:12.860 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.860 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:12.860 14:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.860 14:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.860 14:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.860 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:12.860 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:12.860 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:12.860 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:12.860 14:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.860 14:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.860 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.860 14:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.860 14:33:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:12.860 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:12.860 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:12.860 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.860 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:12.860 14:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.860 14:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.860 14:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.860 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:12.860 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:12.860 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:12.860 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:08:12.860 14:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.860 14:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.860 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.860 14:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.860 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:12.860 14:33:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:12.860 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:12.860 14:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.860 14:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.860 [2024-10-01 14:33:04.485353] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:12.860 [2024-10-01 14:33:04.485393] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:12.860 [2024-10-01 14:33:04.485450] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:13.123 14:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.123 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:13.123 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:13.123 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:13.123 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:13.123 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:13.123 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:08:13.123 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.123 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:13.123 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:13.123 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:08:13.123 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:13.123 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.123 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.123 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.123 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.123 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.123 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.123 14:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.123 14:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.123 14:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.123 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.123 "name": "Existed_Raid", 00:08:13.123 "uuid": "347a8af9-e23b-48bc-b307-796b69ed8dd9", 00:08:13.123 "strip_size_kb": 64, 00:08:13.123 "state": "offline", 00:08:13.123 "raid_level": "raid0", 00:08:13.123 "superblock": false, 00:08:13.123 "num_base_bdevs": 4, 00:08:13.123 "num_base_bdevs_discovered": 3, 00:08:13.123 "num_base_bdevs_operational": 3, 00:08:13.123 "base_bdevs_list": [ 00:08:13.123 { 00:08:13.123 "name": null, 00:08:13.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.123 "is_configured": false, 00:08:13.123 "data_offset": 0, 00:08:13.123 "data_size": 65536 00:08:13.123 }, 00:08:13.123 { 00:08:13.123 "name": "BaseBdev2", 00:08:13.123 "uuid": "f40ecfad-0e46-4fdb-8f60-84f01537e1e3", 00:08:13.123 "is_configured": 
true, 00:08:13.123 "data_offset": 0, 00:08:13.123 "data_size": 65536 00:08:13.123 }, 00:08:13.123 { 00:08:13.123 "name": "BaseBdev3", 00:08:13.123 "uuid": "d27515ca-97cf-4fc8-868b-cd0818f35c18", 00:08:13.123 "is_configured": true, 00:08:13.123 "data_offset": 0, 00:08:13.123 "data_size": 65536 00:08:13.123 }, 00:08:13.123 { 00:08:13.123 "name": "BaseBdev4", 00:08:13.123 "uuid": "63359fe6-da3c-4780-8d28-e4fab2bb3a0f", 00:08:13.123 "is_configured": true, 00:08:13.123 "data_offset": 0, 00:08:13.123 "data_size": 65536 00:08:13.123 } 00:08:13.123 ] 00:08:13.123 }' 00:08:13.123 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.123 14:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.385 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:13.385 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:13.385 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:13.385 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.385 14:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.385 14:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.385 14:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.385 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:13.385 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:13.385 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:13.385 14:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:13.385 14:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.385 [2024-10-01 14:33:04.908723] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:13.385 14:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.385 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:13.385 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:13.385 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.385 14:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:13.385 14:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.385 14:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.385 14:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.385 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:13.385 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:13.385 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:13.385 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.385 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.385 [2024-10-01 14:33:05.011961] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:13.655 14:33:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.655 [2024-10-01 14:33:05.110781] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:08:13.655 [2024-10-01 14:33:05.110906] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.655 BaseBdev2 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.655 [ 00:08:13.655 { 00:08:13.655 "name": "BaseBdev2", 00:08:13.655 "aliases": [ 00:08:13.655 "710f64cc-b1de-482e-bb2d-99cd15302075" 00:08:13.655 ], 00:08:13.655 "product_name": "Malloc disk", 00:08:13.655 "block_size": 512, 00:08:13.655 "num_blocks": 65536, 00:08:13.655 "uuid": "710f64cc-b1de-482e-bb2d-99cd15302075", 00:08:13.655 "assigned_rate_limits": { 00:08:13.655 "rw_ios_per_sec": 0, 00:08:13.655 "rw_mbytes_per_sec": 0, 00:08:13.655 "r_mbytes_per_sec": 0, 00:08:13.655 "w_mbytes_per_sec": 0 00:08:13.655 }, 00:08:13.655 "claimed": false, 00:08:13.655 "zoned": false, 00:08:13.655 "supported_io_types": { 00:08:13.655 "read": true, 00:08:13.655 "write": true, 00:08:13.655 "unmap": true, 00:08:13.655 "flush": true, 00:08:13.655 "reset": true, 00:08:13.655 "nvme_admin": false, 00:08:13.655 "nvme_io": false, 00:08:13.655 "nvme_io_md": false, 00:08:13.655 "write_zeroes": true, 00:08:13.655 "zcopy": true, 00:08:13.655 "get_zone_info": false, 00:08:13.655 "zone_management": false, 00:08:13.655 "zone_append": false, 00:08:13.655 "compare": false, 00:08:13.655 "compare_and_write": false, 00:08:13.655 "abort": true, 00:08:13.655 "seek_hole": false, 00:08:13.655 
"seek_data": false, 00:08:13.655 "copy": true, 00:08:13.655 "nvme_iov_md": false 00:08:13.655 }, 00:08:13.655 "memory_domains": [ 00:08:13.655 { 00:08:13.655 "dma_device_id": "system", 00:08:13.655 "dma_device_type": 1 00:08:13.655 }, 00:08:13.655 { 00:08:13.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.655 "dma_device_type": 2 00:08:13.655 } 00:08:13.655 ], 00:08:13.655 "driver_specific": {} 00:08:13.655 } 00:08:13.655 ] 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.655 BaseBdev3 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.655 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.656 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.656 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:13.656 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.656 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.656 [ 00:08:13.656 { 00:08:13.656 "name": "BaseBdev3", 00:08:13.656 "aliases": [ 00:08:13.656 "72b8d6f7-81be-4254-a9e5-ef0e6427dbef" 00:08:13.656 ], 00:08:13.656 "product_name": "Malloc disk", 00:08:13.656 "block_size": 512, 00:08:13.656 "num_blocks": 65536, 00:08:13.656 "uuid": "72b8d6f7-81be-4254-a9e5-ef0e6427dbef", 00:08:13.656 "assigned_rate_limits": { 00:08:13.656 "rw_ios_per_sec": 0, 00:08:13.656 "rw_mbytes_per_sec": 0, 00:08:13.656 "r_mbytes_per_sec": 0, 00:08:13.656 "w_mbytes_per_sec": 0 00:08:13.656 }, 00:08:13.656 "claimed": false, 00:08:13.656 "zoned": false, 00:08:13.656 "supported_io_types": { 00:08:13.656 "read": true, 00:08:13.656 "write": true, 00:08:13.656 "unmap": true, 00:08:13.656 "flush": true, 00:08:13.656 "reset": true, 00:08:13.656 "nvme_admin": false, 00:08:13.656 "nvme_io": false, 00:08:13.656 "nvme_io_md": false, 00:08:13.656 "write_zeroes": true, 00:08:13.656 "zcopy": true, 00:08:13.656 "get_zone_info": false, 00:08:13.656 "zone_management": false, 00:08:13.656 "zone_append": false, 00:08:13.656 "compare": false, 00:08:13.656 "compare_and_write": false, 00:08:13.656 "abort": true, 00:08:13.656 "seek_hole": false, 00:08:13.656 "seek_data": false, 
00:08:13.656 "copy": true, 00:08:13.656 "nvme_iov_md": false 00:08:13.656 }, 00:08:13.656 "memory_domains": [ 00:08:13.656 { 00:08:13.656 "dma_device_id": "system", 00:08:13.656 "dma_device_type": 1 00:08:13.656 }, 00:08:13.656 { 00:08:13.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.656 "dma_device_type": 2 00:08:13.656 } 00:08:13.656 ], 00:08:13.656 "driver_specific": {} 00:08:13.656 } 00:08:13.656 ] 00:08:13.656 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.656 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:13.656 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:13.656 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:13.656 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:08:13.656 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.656 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.929 BaseBdev4 00:08:13.929 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.929 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:08:13.929 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:08:13.929 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:13.929 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:13.929 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:13.929 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:13.929 
14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:13.929 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.929 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.929 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.929 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:08:13.929 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.930 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.930 [ 00:08:13.930 { 00:08:13.930 "name": "BaseBdev4", 00:08:13.930 "aliases": [ 00:08:13.930 "b9551469-1b65-40f0-92d1-66601ccb6091" 00:08:13.930 ], 00:08:13.930 "product_name": "Malloc disk", 00:08:13.930 "block_size": 512, 00:08:13.930 "num_blocks": 65536, 00:08:13.930 "uuid": "b9551469-1b65-40f0-92d1-66601ccb6091", 00:08:13.930 "assigned_rate_limits": { 00:08:13.930 "rw_ios_per_sec": 0, 00:08:13.930 "rw_mbytes_per_sec": 0, 00:08:13.930 "r_mbytes_per_sec": 0, 00:08:13.930 "w_mbytes_per_sec": 0 00:08:13.930 }, 00:08:13.930 "claimed": false, 00:08:13.930 "zoned": false, 00:08:13.930 "supported_io_types": { 00:08:13.930 "read": true, 00:08:13.930 "write": true, 00:08:13.930 "unmap": true, 00:08:13.930 "flush": true, 00:08:13.930 "reset": true, 00:08:13.930 "nvme_admin": false, 00:08:13.930 "nvme_io": false, 00:08:13.930 "nvme_io_md": false, 00:08:13.930 "write_zeroes": true, 00:08:13.930 "zcopy": true, 00:08:13.930 "get_zone_info": false, 00:08:13.930 "zone_management": false, 00:08:13.930 "zone_append": false, 00:08:13.930 "compare": false, 00:08:13.930 "compare_and_write": false, 00:08:13.930 "abort": true, 00:08:13.930 "seek_hole": false, 00:08:13.930 "seek_data": false, 00:08:13.930 
"copy": true, 00:08:13.930 "nvme_iov_md": false 00:08:13.930 }, 00:08:13.930 "memory_domains": [ 00:08:13.930 { 00:08:13.930 "dma_device_id": "system", 00:08:13.930 "dma_device_type": 1 00:08:13.930 }, 00:08:13.930 { 00:08:13.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.930 "dma_device_type": 2 00:08:13.930 } 00:08:13.930 ], 00:08:13.930 "driver_specific": {} 00:08:13.930 } 00:08:13.930 ] 00:08:13.930 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.930 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:13.930 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:13.930 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:13.930 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:13.930 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.930 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.930 [2024-10-01 14:33:05.374758] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:13.930 [2024-10-01 14:33:05.374887] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:13.930 [2024-10-01 14:33:05.374953] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:13.930 [2024-10-01 14:33:05.376797] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:13.930 [2024-10-01 14:33:05.376921] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:08:13.930 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.930 14:33:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:13.930 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.930 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:13.930 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:13.930 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.930 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:13.930 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.930 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.930 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.930 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.930 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.930 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.930 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.930 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.930 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.930 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.930 "name": "Existed_Raid", 00:08:13.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.930 "strip_size_kb": 64, 00:08:13.930 "state": "configuring", 00:08:13.930 
"raid_level": "raid0", 00:08:13.930 "superblock": false, 00:08:13.930 "num_base_bdevs": 4, 00:08:13.930 "num_base_bdevs_discovered": 3, 00:08:13.930 "num_base_bdevs_operational": 4, 00:08:13.930 "base_bdevs_list": [ 00:08:13.930 { 00:08:13.930 "name": "BaseBdev1", 00:08:13.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.930 "is_configured": false, 00:08:13.930 "data_offset": 0, 00:08:13.930 "data_size": 0 00:08:13.930 }, 00:08:13.930 { 00:08:13.930 "name": "BaseBdev2", 00:08:13.930 "uuid": "710f64cc-b1de-482e-bb2d-99cd15302075", 00:08:13.930 "is_configured": true, 00:08:13.930 "data_offset": 0, 00:08:13.930 "data_size": 65536 00:08:13.930 }, 00:08:13.930 { 00:08:13.930 "name": "BaseBdev3", 00:08:13.930 "uuid": "72b8d6f7-81be-4254-a9e5-ef0e6427dbef", 00:08:13.930 "is_configured": true, 00:08:13.930 "data_offset": 0, 00:08:13.930 "data_size": 65536 00:08:13.930 }, 00:08:13.930 { 00:08:13.930 "name": "BaseBdev4", 00:08:13.930 "uuid": "b9551469-1b65-40f0-92d1-66601ccb6091", 00:08:13.930 "is_configured": true, 00:08:13.930 "data_offset": 0, 00:08:13.930 "data_size": 65536 00:08:13.930 } 00:08:13.930 ] 00:08:13.930 }' 00:08:13.930 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.930 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.192 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:14.192 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.192 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.192 [2024-10-01 14:33:05.710841] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:14.192 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.192 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:14.192 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.192 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.192 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:14.192 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.192 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:14.192 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.192 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.192 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.192 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.192 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.192 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.192 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.192 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.192 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.192 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.192 "name": "Existed_Raid", 00:08:14.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.192 "strip_size_kb": 64, 00:08:14.192 "state": "configuring", 00:08:14.192 "raid_level": "raid0", 00:08:14.192 "superblock": false, 00:08:14.192 
"num_base_bdevs": 4, 00:08:14.192 "num_base_bdevs_discovered": 2, 00:08:14.192 "num_base_bdevs_operational": 4, 00:08:14.192 "base_bdevs_list": [ 00:08:14.192 { 00:08:14.192 "name": "BaseBdev1", 00:08:14.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.192 "is_configured": false, 00:08:14.192 "data_offset": 0, 00:08:14.192 "data_size": 0 00:08:14.192 }, 00:08:14.192 { 00:08:14.192 "name": null, 00:08:14.192 "uuid": "710f64cc-b1de-482e-bb2d-99cd15302075", 00:08:14.192 "is_configured": false, 00:08:14.192 "data_offset": 0, 00:08:14.192 "data_size": 65536 00:08:14.192 }, 00:08:14.192 { 00:08:14.192 "name": "BaseBdev3", 00:08:14.192 "uuid": "72b8d6f7-81be-4254-a9e5-ef0e6427dbef", 00:08:14.192 "is_configured": true, 00:08:14.192 "data_offset": 0, 00:08:14.192 "data_size": 65536 00:08:14.192 }, 00:08:14.192 { 00:08:14.192 "name": "BaseBdev4", 00:08:14.192 "uuid": "b9551469-1b65-40f0-92d1-66601ccb6091", 00:08:14.192 "is_configured": true, 00:08:14.192 "data_offset": 0, 00:08:14.192 "data_size": 65536 00:08:14.192 } 00:08:14.192 ] 00:08:14.192 }' 00:08:14.192 14:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.192 14:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.452 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.452 14:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.452 14:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.452 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:14.452 14:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.452 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:14.452 14:33:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:14.452 14:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.452 14:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.452 [2024-10-01 14:33:06.077569] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:14.452 BaseBdev1 00:08:14.453 14:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.453 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:14.453 14:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:14.453 14:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:14.453 14:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:14.453 14:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:14.453 14:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:14.453 14:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:14.453 14:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.453 14:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.453 14:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.453 14:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:14.453 14:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.453 14:33:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:14.453 [ 00:08:14.453 { 00:08:14.453 "name": "BaseBdev1", 00:08:14.453 "aliases": [ 00:08:14.453 "3038d264-4bf1-41a6-a63e-24247ad9425d" 00:08:14.453 ], 00:08:14.453 "product_name": "Malloc disk", 00:08:14.453 "block_size": 512, 00:08:14.453 "num_blocks": 65536, 00:08:14.453 "uuid": "3038d264-4bf1-41a6-a63e-24247ad9425d", 00:08:14.453 "assigned_rate_limits": { 00:08:14.453 "rw_ios_per_sec": 0, 00:08:14.453 "rw_mbytes_per_sec": 0, 00:08:14.453 "r_mbytes_per_sec": 0, 00:08:14.453 "w_mbytes_per_sec": 0 00:08:14.453 }, 00:08:14.453 "claimed": true, 00:08:14.453 "claim_type": "exclusive_write", 00:08:14.453 "zoned": false, 00:08:14.453 "supported_io_types": { 00:08:14.453 "read": true, 00:08:14.453 "write": true, 00:08:14.453 "unmap": true, 00:08:14.453 "flush": true, 00:08:14.453 "reset": true, 00:08:14.453 "nvme_admin": false, 00:08:14.453 "nvme_io": false, 00:08:14.453 "nvme_io_md": false, 00:08:14.453 "write_zeroes": true, 00:08:14.453 "zcopy": true, 00:08:14.453 "get_zone_info": false, 00:08:14.453 "zone_management": false, 00:08:14.453 "zone_append": false, 00:08:14.453 "compare": false, 00:08:14.453 "compare_and_write": false, 00:08:14.453 "abort": true, 00:08:14.453 "seek_hole": false, 00:08:14.453 "seek_data": false, 00:08:14.453 "copy": true, 00:08:14.453 "nvme_iov_md": false 00:08:14.453 }, 00:08:14.453 "memory_domains": [ 00:08:14.453 { 00:08:14.453 "dma_device_id": "system", 00:08:14.453 "dma_device_type": 1 00:08:14.453 }, 00:08:14.453 { 00:08:14.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.453 "dma_device_type": 2 00:08:14.453 } 00:08:14.453 ], 00:08:14.453 "driver_specific": {} 00:08:14.453 } 00:08:14.453 ] 00:08:14.453 14:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.453 14:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:14.453 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:14.453 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.453 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.453 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:14.453 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.453 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:14.453 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.453 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.453 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.453 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.453 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.453 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.453 14:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.453 14:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.453 14:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.714 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.714 "name": "Existed_Raid", 00:08:14.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.714 "strip_size_kb": 64, 00:08:14.714 "state": "configuring", 00:08:14.714 "raid_level": "raid0", 00:08:14.714 "superblock": false, 
00:08:14.714 "num_base_bdevs": 4, 00:08:14.714 "num_base_bdevs_discovered": 3, 00:08:14.714 "num_base_bdevs_operational": 4, 00:08:14.714 "base_bdevs_list": [ 00:08:14.714 { 00:08:14.714 "name": "BaseBdev1", 00:08:14.714 "uuid": "3038d264-4bf1-41a6-a63e-24247ad9425d", 00:08:14.714 "is_configured": true, 00:08:14.714 "data_offset": 0, 00:08:14.714 "data_size": 65536 00:08:14.714 }, 00:08:14.714 { 00:08:14.714 "name": null, 00:08:14.714 "uuid": "710f64cc-b1de-482e-bb2d-99cd15302075", 00:08:14.714 "is_configured": false, 00:08:14.714 "data_offset": 0, 00:08:14.714 "data_size": 65536 00:08:14.714 }, 00:08:14.714 { 00:08:14.714 "name": "BaseBdev3", 00:08:14.714 "uuid": "72b8d6f7-81be-4254-a9e5-ef0e6427dbef", 00:08:14.714 "is_configured": true, 00:08:14.714 "data_offset": 0, 00:08:14.714 "data_size": 65536 00:08:14.714 }, 00:08:14.714 { 00:08:14.714 "name": "BaseBdev4", 00:08:14.714 "uuid": "b9551469-1b65-40f0-92d1-66601ccb6091", 00:08:14.714 "is_configured": true, 00:08:14.714 "data_offset": 0, 00:08:14.714 "data_size": 65536 00:08:14.714 } 00:08:14.714 ] 00:08:14.714 }' 00:08:14.714 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.714 14:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.984 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.984 14:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.984 14:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.984 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:14.984 14:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.984 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:14.984 14:33:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:14.984 14:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.984 14:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.984 [2024-10-01 14:33:06.453733] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:14.984 14:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.984 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:14.984 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.984 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.984 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:14.984 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.984 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:14.984 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.984 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.984 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.984 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.984 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.984 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.984 14:33:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.984 14:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.984 14:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.984 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.984 "name": "Existed_Raid", 00:08:14.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.984 "strip_size_kb": 64, 00:08:14.984 "state": "configuring", 00:08:14.984 "raid_level": "raid0", 00:08:14.984 "superblock": false, 00:08:14.984 "num_base_bdevs": 4, 00:08:14.984 "num_base_bdevs_discovered": 2, 00:08:14.984 "num_base_bdevs_operational": 4, 00:08:14.984 "base_bdevs_list": [ 00:08:14.984 { 00:08:14.984 "name": "BaseBdev1", 00:08:14.984 "uuid": "3038d264-4bf1-41a6-a63e-24247ad9425d", 00:08:14.984 "is_configured": true, 00:08:14.984 "data_offset": 0, 00:08:14.984 "data_size": 65536 00:08:14.984 }, 00:08:14.984 { 00:08:14.984 "name": null, 00:08:14.985 "uuid": "710f64cc-b1de-482e-bb2d-99cd15302075", 00:08:14.985 "is_configured": false, 00:08:14.985 "data_offset": 0, 00:08:14.985 "data_size": 65536 00:08:14.985 }, 00:08:14.985 { 00:08:14.985 "name": null, 00:08:14.985 "uuid": "72b8d6f7-81be-4254-a9e5-ef0e6427dbef", 00:08:14.985 "is_configured": false, 00:08:14.985 "data_offset": 0, 00:08:14.985 "data_size": 65536 00:08:14.985 }, 00:08:14.985 { 00:08:14.985 "name": "BaseBdev4", 00:08:14.985 "uuid": "b9551469-1b65-40f0-92d1-66601ccb6091", 00:08:14.985 "is_configured": true, 00:08:14.985 "data_offset": 0, 00:08:14.985 "data_size": 65536 00:08:14.985 } 00:08:14.985 ] 00:08:14.985 }' 00:08:14.985 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.985 14:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.243 14:33:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:15.243 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.243 14:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.243 14:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.243 14:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.243 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:15.243 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:15.243 14:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.243 14:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.243 [2024-10-01 14:33:06.841849] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:15.243 14:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.243 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:15.243 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.243 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.243 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.243 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.243 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:15.243 14:33:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.243 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.243 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.243 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.243 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.243 14:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.243 14:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.243 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.243 14:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.243 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.243 "name": "Existed_Raid", 00:08:15.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.243 "strip_size_kb": 64, 00:08:15.243 "state": "configuring", 00:08:15.243 "raid_level": "raid0", 00:08:15.243 "superblock": false, 00:08:15.243 "num_base_bdevs": 4, 00:08:15.243 "num_base_bdevs_discovered": 3, 00:08:15.243 "num_base_bdevs_operational": 4, 00:08:15.243 "base_bdevs_list": [ 00:08:15.243 { 00:08:15.243 "name": "BaseBdev1", 00:08:15.243 "uuid": "3038d264-4bf1-41a6-a63e-24247ad9425d", 00:08:15.243 "is_configured": true, 00:08:15.243 "data_offset": 0, 00:08:15.243 "data_size": 65536 00:08:15.243 }, 00:08:15.243 { 00:08:15.243 "name": null, 00:08:15.243 "uuid": "710f64cc-b1de-482e-bb2d-99cd15302075", 00:08:15.243 "is_configured": false, 00:08:15.243 "data_offset": 0, 00:08:15.243 "data_size": 65536 00:08:15.243 }, 00:08:15.243 { 00:08:15.243 "name": "BaseBdev3", 00:08:15.243 "uuid": "72b8d6f7-81be-4254-a9e5-ef0e6427dbef", 
00:08:15.243 "is_configured": true, 00:08:15.243 "data_offset": 0, 00:08:15.243 "data_size": 65536 00:08:15.243 }, 00:08:15.243 { 00:08:15.243 "name": "BaseBdev4", 00:08:15.243 "uuid": "b9551469-1b65-40f0-92d1-66601ccb6091", 00:08:15.243 "is_configured": true, 00:08:15.243 "data_offset": 0, 00:08:15.243 "data_size": 65536 00:08:15.243 } 00:08:15.243 ] 00:08:15.243 }' 00:08:15.243 14:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.243 14:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.503 14:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:15.503 14:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.503 14:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.503 14:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.503 14:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.503 14:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:15.503 14:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:15.503 14:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.503 14:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.761 [2024-10-01 14:33:07.189928] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:15.761 14:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.761 14:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:15.761 14:33:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.761 14:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.761 14:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.761 14:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.761 14:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:15.761 14:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.761 14:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.761 14:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.761 14:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.761 14:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.761 14:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.761 14:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.761 14:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.761 14:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.761 14:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.761 "name": "Existed_Raid", 00:08:15.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.761 "strip_size_kb": 64, 00:08:15.761 "state": "configuring", 00:08:15.761 "raid_level": "raid0", 00:08:15.761 "superblock": false, 00:08:15.761 "num_base_bdevs": 4, 00:08:15.761 "num_base_bdevs_discovered": 2, 00:08:15.761 
"num_base_bdevs_operational": 4, 00:08:15.761 "base_bdevs_list": [ 00:08:15.761 { 00:08:15.761 "name": null, 00:08:15.761 "uuid": "3038d264-4bf1-41a6-a63e-24247ad9425d", 00:08:15.761 "is_configured": false, 00:08:15.761 "data_offset": 0, 00:08:15.761 "data_size": 65536 00:08:15.761 }, 00:08:15.761 { 00:08:15.761 "name": null, 00:08:15.761 "uuid": "710f64cc-b1de-482e-bb2d-99cd15302075", 00:08:15.761 "is_configured": false, 00:08:15.761 "data_offset": 0, 00:08:15.761 "data_size": 65536 00:08:15.761 }, 00:08:15.761 { 00:08:15.761 "name": "BaseBdev3", 00:08:15.761 "uuid": "72b8d6f7-81be-4254-a9e5-ef0e6427dbef", 00:08:15.761 "is_configured": true, 00:08:15.761 "data_offset": 0, 00:08:15.761 "data_size": 65536 00:08:15.761 }, 00:08:15.761 { 00:08:15.761 "name": "BaseBdev4", 00:08:15.761 "uuid": "b9551469-1b65-40f0-92d1-66601ccb6091", 00:08:15.761 "is_configured": true, 00:08:15.761 "data_offset": 0, 00:08:15.761 "data_size": 65536 00:08:15.761 } 00:08:15.761 ] 00:08:15.761 }' 00:08:15.761 14:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.761 14:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.021 14:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:16.021 14:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.021 14:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.021 14:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.021 14:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.021 14:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:16.021 14:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:08:16.021 14:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.021 14:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.021 [2024-10-01 14:33:07.631895] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:16.021 14:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.021 14:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:16.021 14:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.021 14:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.021 14:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:16.021 14:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.021 14:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:16.021 14:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.021 14:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.021 14:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.021 14:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.021 14:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.021 14:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.021 14:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.021 
14:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.021 14:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.021 14:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.021 "name": "Existed_Raid", 00:08:16.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.021 "strip_size_kb": 64, 00:08:16.021 "state": "configuring", 00:08:16.021 "raid_level": "raid0", 00:08:16.021 "superblock": false, 00:08:16.021 "num_base_bdevs": 4, 00:08:16.021 "num_base_bdevs_discovered": 3, 00:08:16.021 "num_base_bdevs_operational": 4, 00:08:16.021 "base_bdevs_list": [ 00:08:16.021 { 00:08:16.021 "name": null, 00:08:16.021 "uuid": "3038d264-4bf1-41a6-a63e-24247ad9425d", 00:08:16.021 "is_configured": false, 00:08:16.021 "data_offset": 0, 00:08:16.021 "data_size": 65536 00:08:16.021 }, 00:08:16.021 { 00:08:16.021 "name": "BaseBdev2", 00:08:16.021 "uuid": "710f64cc-b1de-482e-bb2d-99cd15302075", 00:08:16.021 "is_configured": true, 00:08:16.021 "data_offset": 0, 00:08:16.021 "data_size": 65536 00:08:16.021 }, 00:08:16.021 { 00:08:16.021 "name": "BaseBdev3", 00:08:16.021 "uuid": "72b8d6f7-81be-4254-a9e5-ef0e6427dbef", 00:08:16.021 "is_configured": true, 00:08:16.021 "data_offset": 0, 00:08:16.021 "data_size": 65536 00:08:16.021 }, 00:08:16.021 { 00:08:16.021 "name": "BaseBdev4", 00:08:16.021 "uuid": "b9551469-1b65-40f0-92d1-66601ccb6091", 00:08:16.021 "is_configured": true, 00:08:16.021 "data_offset": 0, 00:08:16.021 "data_size": 65536 00:08:16.021 } 00:08:16.021 ] 00:08:16.021 }' 00:08:16.021 14:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.021 14:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.281 14:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.281 14:33:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:16.281 14:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.281 14:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.542 14:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.542 14:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:16.542 14:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:16.542 14:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.542 14:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.542 14:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.542 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.542 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3038d264-4bf1-41a6-a63e-24247ad9425d 00:08:16.542 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.542 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.542 [2024-10-01 14:33:08.042470] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:16.542 [2024-10-01 14:33:08.042513] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:16.542 [2024-10-01 14:33:08.042520] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:08:16.542 [2024-10-01 14:33:08.042797] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
00:08:16.542 [2024-10-01 14:33:08.042934] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:16.542 [2024-10-01 14:33:08.042944] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:16.542 NewBaseBdev 00:08:16.542 [2024-10-01 14:33:08.043156] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:16.542 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.542 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:16.542 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:16.542 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:16.542 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:16.542 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:16.542 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:16.542 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:16.542 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.542 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.542 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.542 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:16.542 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.542 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:16.542 [ 00:08:16.542 { 00:08:16.542 "name": "NewBaseBdev", 00:08:16.542 "aliases": [ 00:08:16.542 "3038d264-4bf1-41a6-a63e-24247ad9425d" 00:08:16.542 ], 00:08:16.542 "product_name": "Malloc disk", 00:08:16.542 "block_size": 512, 00:08:16.542 "num_blocks": 65536, 00:08:16.542 "uuid": "3038d264-4bf1-41a6-a63e-24247ad9425d", 00:08:16.542 "assigned_rate_limits": { 00:08:16.542 "rw_ios_per_sec": 0, 00:08:16.542 "rw_mbytes_per_sec": 0, 00:08:16.542 "r_mbytes_per_sec": 0, 00:08:16.542 "w_mbytes_per_sec": 0 00:08:16.542 }, 00:08:16.542 "claimed": true, 00:08:16.542 "claim_type": "exclusive_write", 00:08:16.542 "zoned": false, 00:08:16.542 "supported_io_types": { 00:08:16.542 "read": true, 00:08:16.542 "write": true, 00:08:16.542 "unmap": true, 00:08:16.542 "flush": true, 00:08:16.543 "reset": true, 00:08:16.543 "nvme_admin": false, 00:08:16.543 "nvme_io": false, 00:08:16.543 "nvme_io_md": false, 00:08:16.543 "write_zeroes": true, 00:08:16.543 "zcopy": true, 00:08:16.543 "get_zone_info": false, 00:08:16.543 "zone_management": false, 00:08:16.543 "zone_append": false, 00:08:16.543 "compare": false, 00:08:16.543 "compare_and_write": false, 00:08:16.543 "abort": true, 00:08:16.543 "seek_hole": false, 00:08:16.543 "seek_data": false, 00:08:16.543 "copy": true, 00:08:16.543 "nvme_iov_md": false 00:08:16.543 }, 00:08:16.543 "memory_domains": [ 00:08:16.543 { 00:08:16.543 "dma_device_id": "system", 00:08:16.543 "dma_device_type": 1 00:08:16.543 }, 00:08:16.543 { 00:08:16.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.543 "dma_device_type": 2 00:08:16.543 } 00:08:16.543 ], 00:08:16.543 "driver_specific": {} 00:08:16.543 } 00:08:16.543 ] 00:08:16.543 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.543 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:16.543 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:08:16.543 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.543 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:16.543 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:16.543 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.543 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:16.543 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.543 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.543 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.543 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.543 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.543 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.543 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.543 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.543 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.543 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.543 "name": "Existed_Raid", 00:08:16.543 "uuid": "b1c252ae-f14f-4070-8306-49d0668b9e86", 00:08:16.543 "strip_size_kb": 64, 00:08:16.543 "state": "online", 00:08:16.543 "raid_level": "raid0", 00:08:16.543 "superblock": false, 00:08:16.543 "num_base_bdevs": 4, 00:08:16.543 
"num_base_bdevs_discovered": 4, 00:08:16.543 "num_base_bdevs_operational": 4, 00:08:16.543 "base_bdevs_list": [ 00:08:16.543 { 00:08:16.543 "name": "NewBaseBdev", 00:08:16.543 "uuid": "3038d264-4bf1-41a6-a63e-24247ad9425d", 00:08:16.543 "is_configured": true, 00:08:16.543 "data_offset": 0, 00:08:16.543 "data_size": 65536 00:08:16.543 }, 00:08:16.543 { 00:08:16.543 "name": "BaseBdev2", 00:08:16.543 "uuid": "710f64cc-b1de-482e-bb2d-99cd15302075", 00:08:16.543 "is_configured": true, 00:08:16.543 "data_offset": 0, 00:08:16.543 "data_size": 65536 00:08:16.543 }, 00:08:16.543 { 00:08:16.543 "name": "BaseBdev3", 00:08:16.543 "uuid": "72b8d6f7-81be-4254-a9e5-ef0e6427dbef", 00:08:16.543 "is_configured": true, 00:08:16.543 "data_offset": 0, 00:08:16.543 "data_size": 65536 00:08:16.543 }, 00:08:16.543 { 00:08:16.543 "name": "BaseBdev4", 00:08:16.543 "uuid": "b9551469-1b65-40f0-92d1-66601ccb6091", 00:08:16.543 "is_configured": true, 00:08:16.543 "data_offset": 0, 00:08:16.543 "data_size": 65536 00:08:16.543 } 00:08:16.543 ] 00:08:16.543 }' 00:08:16.543 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.543 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.804 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:16.804 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:16.804 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:16.804 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:16.804 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:16.804 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:16.804 14:33:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:16.804 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.804 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.804 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:16.804 [2024-10-01 14:33:08.390978] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:16.804 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.804 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:16.804 "name": "Existed_Raid", 00:08:16.804 "aliases": [ 00:08:16.804 "b1c252ae-f14f-4070-8306-49d0668b9e86" 00:08:16.804 ], 00:08:16.804 "product_name": "Raid Volume", 00:08:16.804 "block_size": 512, 00:08:16.804 "num_blocks": 262144, 00:08:16.804 "uuid": "b1c252ae-f14f-4070-8306-49d0668b9e86", 00:08:16.804 "assigned_rate_limits": { 00:08:16.804 "rw_ios_per_sec": 0, 00:08:16.804 "rw_mbytes_per_sec": 0, 00:08:16.804 "r_mbytes_per_sec": 0, 00:08:16.804 "w_mbytes_per_sec": 0 00:08:16.804 }, 00:08:16.804 "claimed": false, 00:08:16.804 "zoned": false, 00:08:16.804 "supported_io_types": { 00:08:16.804 "read": true, 00:08:16.804 "write": true, 00:08:16.804 "unmap": true, 00:08:16.804 "flush": true, 00:08:16.804 "reset": true, 00:08:16.804 "nvme_admin": false, 00:08:16.804 "nvme_io": false, 00:08:16.804 "nvme_io_md": false, 00:08:16.804 "write_zeroes": true, 00:08:16.804 "zcopy": false, 00:08:16.804 "get_zone_info": false, 00:08:16.804 "zone_management": false, 00:08:16.804 "zone_append": false, 00:08:16.804 "compare": false, 00:08:16.804 "compare_and_write": false, 00:08:16.804 "abort": false, 00:08:16.804 "seek_hole": false, 00:08:16.804 "seek_data": false, 00:08:16.804 "copy": false, 00:08:16.804 "nvme_iov_md": false 00:08:16.804 }, 00:08:16.804 "memory_domains": [ 
00:08:16.804 { 00:08:16.804 "dma_device_id": "system", 00:08:16.804 "dma_device_type": 1 00:08:16.804 }, 00:08:16.804 { 00:08:16.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.804 "dma_device_type": 2 00:08:16.804 }, 00:08:16.804 { 00:08:16.804 "dma_device_id": "system", 00:08:16.804 "dma_device_type": 1 00:08:16.804 }, 00:08:16.804 { 00:08:16.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.804 "dma_device_type": 2 00:08:16.804 }, 00:08:16.804 { 00:08:16.804 "dma_device_id": "system", 00:08:16.804 "dma_device_type": 1 00:08:16.804 }, 00:08:16.804 { 00:08:16.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.804 "dma_device_type": 2 00:08:16.804 }, 00:08:16.804 { 00:08:16.804 "dma_device_id": "system", 00:08:16.804 "dma_device_type": 1 00:08:16.804 }, 00:08:16.804 { 00:08:16.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.804 "dma_device_type": 2 00:08:16.804 } 00:08:16.804 ], 00:08:16.804 "driver_specific": { 00:08:16.804 "raid": { 00:08:16.804 "uuid": "b1c252ae-f14f-4070-8306-49d0668b9e86", 00:08:16.804 "strip_size_kb": 64, 00:08:16.804 "state": "online", 00:08:16.804 "raid_level": "raid0", 00:08:16.804 "superblock": false, 00:08:16.804 "num_base_bdevs": 4, 00:08:16.804 "num_base_bdevs_discovered": 4, 00:08:16.804 "num_base_bdevs_operational": 4, 00:08:16.804 "base_bdevs_list": [ 00:08:16.804 { 00:08:16.804 "name": "NewBaseBdev", 00:08:16.804 "uuid": "3038d264-4bf1-41a6-a63e-24247ad9425d", 00:08:16.804 "is_configured": true, 00:08:16.804 "data_offset": 0, 00:08:16.804 "data_size": 65536 00:08:16.804 }, 00:08:16.804 { 00:08:16.804 "name": "BaseBdev2", 00:08:16.804 "uuid": "710f64cc-b1de-482e-bb2d-99cd15302075", 00:08:16.804 "is_configured": true, 00:08:16.804 "data_offset": 0, 00:08:16.804 "data_size": 65536 00:08:16.804 }, 00:08:16.804 { 00:08:16.804 "name": "BaseBdev3", 00:08:16.804 "uuid": "72b8d6f7-81be-4254-a9e5-ef0e6427dbef", 00:08:16.804 "is_configured": true, 00:08:16.804 "data_offset": 0, 00:08:16.805 "data_size": 65536 
00:08:16.805 }, 00:08:16.805 { 00:08:16.805 "name": "BaseBdev4", 00:08:16.805 "uuid": "b9551469-1b65-40f0-92d1-66601ccb6091", 00:08:16.805 "is_configured": true, 00:08:16.805 "data_offset": 0, 00:08:16.805 "data_size": 65536 00:08:16.805 } 00:08:16.805 ] 00:08:16.805 } 00:08:16.805 } 00:08:16.805 }' 00:08:16.805 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:16.805 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:16.805 BaseBdev2 00:08:16.805 BaseBdev3 00:08:16.805 BaseBdev4' 00:08:16.805 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.805 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:16.805 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:17.067 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:17.067 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.067 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.067 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.067 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.067 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:17.067 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:17.067 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:17.067 
14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:17.067 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.067 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.067 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.067 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.067 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:17.067 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:17.067 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:17.067 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.067 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:17.067 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.067 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.067 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.067 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:17.067 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:17.067 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:17.067 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:08:17.067 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.067 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.067 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.067 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.067 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:17.067 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:17.067 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:17.067 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.067 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.067 [2024-10-01 14:33:08.618668] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:17.067 [2024-10-01 14:33:08.618792] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:17.067 [2024-10-01 14:33:08.618871] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:17.067 [2024-10-01 14:33:08.618935] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:17.067 [2024-10-01 14:33:08.618946] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:17.067 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.067 14:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67870 00:08:17.067 14:33:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@950 -- # '[' -z 67870 ']' 00:08:17.067 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 67870 00:08:17.067 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:17.067 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:17.067 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67870 00:08:17.067 killing process with pid 67870 00:08:17.067 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:17.067 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:17.067 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67870' 00:08:17.067 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 67870 00:08:17.067 [2024-10-01 14:33:08.654417] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:17.067 14:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 67870 00:08:17.328 [2024-10-01 14:33:08.902190] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:18.270 14:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:18.270 00:08:18.270 real 0m8.523s 00:08:18.270 user 0m13.448s 00:08:18.270 sys 0m1.435s 00:08:18.270 14:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:18.270 ************************************ 00:08:18.270 END TEST raid_state_function_test 00:08:18.270 ************************************ 00:08:18.270 14:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.270 14:33:09 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:08:18.270 14:33:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:18.270 14:33:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:18.270 14:33:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:18.270 ************************************ 00:08:18.270 START TEST raid_state_function_test_sb 00:08:18.270 ************************************ 00:08:18.270 14:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true 00:08:18.270 14:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:18.270 14:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:08:18.270 14:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:18.270 14:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:18.270 14:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:18.270 14:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:18.270 14:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:18.270 14:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:18.270 14:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:18.270 14:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:18.270 14:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:18.270 14:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:18.270 14:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:18.270 
14:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:18.270 14:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:18.270 14:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:08:18.270 14:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:18.270 14:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:18.270 Process raid pid: 68508 00:08:18.270 14:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:08:18.270 14:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:18.270 14:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:18.270 14:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:18.270 14:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:18.270 14:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:18.270 14:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:18.270 14:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:18.270 14:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:18.270 14:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:18.270 14:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:18.270 14:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68508 00:08:18.270 14:33:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68508' 00:08:18.270 14:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68508 00:08:18.270 14:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 68508 ']' 00:08:18.270 14:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.270 14:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:18.270 14:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.270 14:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:18.270 14:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:18.270 14:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.270 [2024-10-01 14:33:09.873229] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:08:18.270 [2024-10-01 14:33:09.873525] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:18.531 [2024-10-01 14:33:10.026097] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:18.792 [2024-10-01 14:33:10.218807] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:08:18.792 [2024-10-01 14:33:10.356230] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:18.792 [2024-10-01 14:33:10.356265] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:19.052 14:33:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:19.052 14:33:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0
00:08:19.052 14:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:08:19.052 14:33:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:19.052 14:33:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:19.052 [2024-10-01 14:33:10.722735] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:19.052 [2024-10-01 14:33:10.722784] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:19.052 [2024-10-01 14:33:10.722794] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:19.052 [2024-10-01 14:33:10.722804] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:19.052 [2024-10-01 14:33:10.722810] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:08:19.052 [2024-10-01 14:33:10.722820] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:08:19.052 [2024-10-01 14:33:10.722826] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:08:19.052 [2024-10-01 14:33:10.722834] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:08:19.052 14:33:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:19.052 14:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:08:19.052 14:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:19.052 14:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:19.052 14:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:19.052 14:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:19.052 14:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:08:19.052 14:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:19.052 14:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:19.052 14:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:19.052 14:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:19.052 14:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:19.052 14:33:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:19.052 14:33:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:19.052 14:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:19.311 14:33:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:19.311 14:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:19.311 "name": "Existed_Raid",
00:08:19.311 "uuid": "5729fef0-711f-4b74-aa0a-d235f18f21da",
00:08:19.311 "strip_size_kb": 64,
00:08:19.311 "state": "configuring",
00:08:19.311 "raid_level": "raid0",
00:08:19.311 "superblock": true,
00:08:19.311 "num_base_bdevs": 4,
00:08:19.311 "num_base_bdevs_discovered": 0,
00:08:19.311 "num_base_bdevs_operational": 4,
00:08:19.311 "base_bdevs_list": [
00:08:19.311 {
00:08:19.311 "name": "BaseBdev1",
00:08:19.311 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:19.311 "is_configured": false,
00:08:19.311 "data_offset": 0,
00:08:19.311 "data_size": 0
00:08:19.311 },
00:08:19.311 {
00:08:19.311 "name": "BaseBdev2",
00:08:19.311 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:19.311 "is_configured": false,
00:08:19.311 "data_offset": 0,
00:08:19.311 "data_size": 0
00:08:19.311 },
00:08:19.311 {
00:08:19.311 "name": "BaseBdev3",
00:08:19.311 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:19.311 "is_configured": false,
00:08:19.311 "data_offset": 0,
00:08:19.311 "data_size": 0
00:08:19.311 },
00:08:19.311 {
00:08:19.311 "name": "BaseBdev4",
00:08:19.311 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:19.311 "is_configured": false,
00:08:19.311 "data_offset": 0,
00:08:19.311 "data_size": 0
00:08:19.311 }
00:08:19.311 ]
00:08:19.311 }'
00:08:19.311 14:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:19.311 14:33:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:19.570 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:19.570 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:19.570 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:19.570 [2024-10-01 14:33:11.050726] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:19.570 [2024-10-01 14:33:11.050762] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:08:19.570 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:19.570 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:08:19.570 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:19.570 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:19.570 [2024-10-01 14:33:11.058752] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:19.570 [2024-10-01 14:33:11.058789] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:19.570 [2024-10-01 14:33:11.058797] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:19.570 [2024-10-01 14:33:11.058806] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:19.570 [2024-10-01 14:33:11.058812] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:08:19.570 [2024-10-01 14:33:11.058820] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:08:19.570 [2024-10-01 14:33:11.058827] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:08:19.570 [2024-10-01 14:33:11.058836] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:08:19.571 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:19.571 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:08:19.571 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:19.571 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:19.571 [2024-10-01 14:33:11.110445] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:19.571 BaseBdev1
00:08:19.571 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:19.571 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:08:19.571 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:08:19.571 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:19.571 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:08:19.571 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:19.571 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:19.571 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:19.571 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:19.571 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:19.571 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:19.571 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:19.571 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:19.571 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:19.571 [
00:08:19.571 {
00:08:19.571 "name": "BaseBdev1",
00:08:19.571 "aliases": [
00:08:19.571 "25c180e2-6a2b-4855-b12f-179e74d7035f"
00:08:19.571 ],
00:08:19.571 "product_name": "Malloc disk",
00:08:19.571 "block_size": 512,
00:08:19.571 "num_blocks": 65536,
00:08:19.571 "uuid": "25c180e2-6a2b-4855-b12f-179e74d7035f",
00:08:19.571 "assigned_rate_limits": {
00:08:19.571 "rw_ios_per_sec": 0,
00:08:19.571 "rw_mbytes_per_sec": 0,
00:08:19.571 "r_mbytes_per_sec": 0,
00:08:19.571 "w_mbytes_per_sec": 0
00:08:19.571 },
00:08:19.571 "claimed": true,
00:08:19.571 "claim_type": "exclusive_write",
00:08:19.571 "zoned": false,
00:08:19.571 "supported_io_types": {
00:08:19.571 "read": true,
00:08:19.571 "write": true,
00:08:19.571 "unmap": true,
00:08:19.571 "flush": true,
00:08:19.571 "reset": true,
00:08:19.571 "nvme_admin": false,
00:08:19.571 "nvme_io": false,
00:08:19.571 "nvme_io_md": false,
00:08:19.571 "write_zeroes": true,
00:08:19.571 "zcopy": true,
00:08:19.571 "get_zone_info": false,
00:08:19.571 "zone_management": false,
00:08:19.571 "zone_append": false,
00:08:19.571 "compare": false,
00:08:19.571 "compare_and_write": false,
00:08:19.571 "abort": true,
00:08:19.571 "seek_hole": false,
00:08:19.571 "seek_data": false,
00:08:19.571 "copy": true,
00:08:19.571 "nvme_iov_md": false
00:08:19.571 },
00:08:19.571 "memory_domains": [
00:08:19.571 {
00:08:19.571 "dma_device_id": "system",
00:08:19.571 "dma_device_type": 1
00:08:19.571 },
00:08:19.571 {
00:08:19.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:19.571 "dma_device_type": 2
00:08:19.571 }
00:08:19.571 ],
00:08:19.571 "driver_specific": {}
00:08:19.571 }
00:08:19.571 ]
00:08:19.571 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:19.571 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:08:19.571 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:08:19.571 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:19.571 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:19.571 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:19.571 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:19.571 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:08:19.571 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:19.571 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:19.571 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:19.571 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:19.571 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:19.571 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:19.571 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:19.571 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:19.571 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:19.571 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:19.571 "name": "Existed_Raid",
00:08:19.571 "uuid": "2e178973-7e43-4926-9244-fd3f9a4ffaeb",
00:08:19.571 "strip_size_kb": 64,
00:08:19.571 "state": "configuring",
00:08:19.571 "raid_level": "raid0",
00:08:19.571 "superblock": true,
00:08:19.571 "num_base_bdevs": 4,
00:08:19.571 "num_base_bdevs_discovered": 1,
00:08:19.571 "num_base_bdevs_operational": 4,
00:08:19.571 "base_bdevs_list": [
00:08:19.571 {
00:08:19.571 "name": "BaseBdev1",
00:08:19.571 "uuid": "25c180e2-6a2b-4855-b12f-179e74d7035f",
00:08:19.571 "is_configured": true,
00:08:19.571 "data_offset": 2048,
00:08:19.571 "data_size": 63488
00:08:19.571 },
00:08:19.571 {
00:08:19.571 "name": "BaseBdev2",
00:08:19.571 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:19.571 "is_configured": false,
00:08:19.571 "data_offset": 0,
00:08:19.571 "data_size": 0
00:08:19.571 },
00:08:19.571 {
00:08:19.571 "name": "BaseBdev3",
00:08:19.571 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:19.571 "is_configured": false,
00:08:19.571 "data_offset": 0,
00:08:19.571 "data_size": 0
00:08:19.571 },
00:08:19.571 {
00:08:19.571 "name": "BaseBdev4",
00:08:19.571 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:19.571 "is_configured": false,
00:08:19.571 "data_offset": 0,
00:08:19.571 "data_size": 0
00:08:19.571 }
00:08:19.571 ]
00:08:19.571 }'
00:08:19.571 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:19.571 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:19.830 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:19.830 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:19.830 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:19.830 [2024-10-01 14:33:11.470569] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:19.830 [2024-10-01 14:33:11.470617] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:08:19.830 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:19.830 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:08:19.830 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:19.830 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:19.830 [2024-10-01 14:33:11.482627] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
[2024-10-01 14:33:11.484573] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
[2024-10-01 14:33:11.484697] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
[2024-10-01 14:33:11.484768] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
[2024-10-01 14:33:11.484800] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
[2024-10-01 14:33:11.484822] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
[2024-10-01 14:33:11.484845] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:08:19.831 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:19.831 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:08:19.831 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:19.831 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:08:19.831 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:19.831 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:19.831 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:19.831 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:19.831 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:08:19.831 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:19.831 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:19.831 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:19.831 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:19.831 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:19.831 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:19.831 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:19.831 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:19.831 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:20.090 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:20.090 "name": "Existed_Raid",
00:08:20.090 "uuid": "d6188022-703f-4032-9a67-33a4f0c4faac",
00:08:20.090 "strip_size_kb": 64,
00:08:20.090 "state": "configuring",
00:08:20.090 "raid_level": "raid0",
00:08:20.090 "superblock": true,
00:08:20.090 "num_base_bdevs": 4,
00:08:20.090 "num_base_bdevs_discovered": 1,
00:08:20.090 "num_base_bdevs_operational": 4,
00:08:20.090 "base_bdevs_list": [
00:08:20.090 {
00:08:20.090 "name": "BaseBdev1",
00:08:20.090 "uuid": "25c180e2-6a2b-4855-b12f-179e74d7035f",
00:08:20.090 "is_configured": true,
00:08:20.090 "data_offset": 2048,
00:08:20.090 "data_size": 63488
00:08:20.090 },
00:08:20.090 {
00:08:20.090 "name": "BaseBdev2",
00:08:20.090 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:20.090 "is_configured": false,
00:08:20.090 "data_offset": 0,
00:08:20.090 "data_size": 0
00:08:20.090 },
00:08:20.090 {
00:08:20.090 "name": "BaseBdev3",
00:08:20.090 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:20.090 "is_configured": false,
00:08:20.090 "data_offset": 0,
00:08:20.090 "data_size": 0
00:08:20.090 },
00:08:20.090 {
00:08:20.090 "name": "BaseBdev4",
00:08:20.090 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:20.090 "is_configured": false,
00:08:20.090 "data_offset": 0,
00:08:20.090 "data_size": 0
00:08:20.090 }
00:08:20.090 ]
00:08:20.090 }'
00:08:20.090 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:20.090 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:20.351 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:08:20.351 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:20.351 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:20.351 [2024-10-01 14:33:11.833375] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:20.351 BaseBdev2
00:08:20.351 [
00:08:20.351 {
00:08:20.351 "name": "BaseBdev2",
00:08:20.351 "aliases": [
00:08:20.351 "8740ff94-5b46-4427-89d7-5aad70b1ec29"
00:08:20.351 ],
00:08:20.351 "product_name": "Malloc disk",
00:08:20.351 "block_size": 512,
00:08:20.351 "num_blocks": 65536,
00:08:20.351 "uuid": "8740ff94-5b46-4427-89d7-5aad70b1ec29",
00:08:20.351 "assigned_rate_limits": {
00:08:20.351 "rw_ios_per_sec": 0,
00:08:20.351 "rw_mbytes_per_sec": 0,
00:08:20.351 "r_mbytes_per_sec": 0,
00:08:20.351 "w_mbytes_per_sec": 0
00:08:20.351 },
00:08:20.351 "claimed": true,
00:08:20.351 "claim_type": "exclusive_write",
00:08:20.351 "zoned": false,
00:08:20.351 "supported_io_types": {
00:08:20.351 "read": true,
00:08:20.351 "write": true,
00:08:20.351 "unmap": true,
00:08:20.351 "flush": true,
00:08:20.351 "reset": true,
00:08:20.351 "nvme_admin": false,
00:08:20.351 "nvme_io": false,
00:08:20.351 "nvme_io_md": false,
00:08:20.351 "write_zeroes": true,
00:08:20.351 "zcopy": true,
00:08:20.351 "get_zone_info": false,
00:08:20.351 "zone_management": false,
00:08:20.351 "zone_append": false,
00:08:20.351 "compare": false,
00:08:20.351 "compare_and_write": false,
00:08:20.351 "abort": true,
00:08:20.351 "seek_hole": false,
00:08:20.351 "seek_data": false,
00:08:20.351 "copy": true,
00:08:20.351 "nvme_iov_md": false
00:08:20.351 },
00:08:20.351 "memory_domains": [
00:08:20.351 {
00:08:20.351 "dma_device_id": "system",
00:08:20.351 "dma_device_type": 1
00:08:20.351 },
00:08:20.351 {
00:08:20.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:20.351 "dma_device_type": 2
00:08:20.351 }
00:08:20.351 ],
00:08:20.351 "driver_specific": {}
00:08:20.351 }
00:08:20.351 ]
00:08:20.351 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:20.351 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:08:20.351 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:08:20.351 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:20.351 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:08:20.351 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:20.351 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:20.351 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:20.351 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:20.351 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:20.351 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:20.351 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:20.351 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:20.351 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:20.351 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:20.351 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:08:20.351 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:08:20.351 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:20.351 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:08:20.351 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:20.351 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:20.351 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:20.351 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:20.351 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:08:20.351 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:20.351 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:20.351 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:20.351 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:20.351 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:20.351 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:20.351 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:20.351 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:20.351 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:20.351 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:20.351 "name": "Existed_Raid",
00:08:20.351 "uuid": "d6188022-703f-4032-9a67-33a4f0c4faac",
00:08:20.351 "strip_size_kb": 64,
00:08:20.351 "state": "configuring",
00:08:20.351 "raid_level": "raid0",
00:08:20.351 "superblock": true,
00:08:20.351 "num_base_bdevs": 4,
00:08:20.351 "num_base_bdevs_discovered": 2,
00:08:20.351 "num_base_bdevs_operational": 4,
00:08:20.351 "base_bdevs_list": [
00:08:20.351 {
00:08:20.351 "name": "BaseBdev1",
00:08:20.351 "uuid": "25c180e2-6a2b-4855-b12f-179e74d7035f",
00:08:20.351 "is_configured": true,
00:08:20.351 "data_offset": 2048,
00:08:20.351 "data_size": 63488
00:08:20.351 },
00:08:20.351 {
00:08:20.351 "name": "BaseBdev2",
00:08:20.351 "uuid": "8740ff94-5b46-4427-89d7-5aad70b1ec29",
00:08:20.351 "is_configured": true,
00:08:20.351 "data_offset": 2048,
00:08:20.351 "data_size": 63488
00:08:20.351 },
00:08:20.351 {
00:08:20.351 "name": "BaseBdev3",
00:08:20.351 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:20.351 "is_configured": false,
00:08:20.351 "data_offset": 0,
00:08:20.352 "data_size": 0
00:08:20.352 },
00:08:20.352 {
00:08:20.352 "name": "BaseBdev4",
00:08:20.352 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:20.352 "is_configured": false,
00:08:20.352 "data_offset": 0,
00:08:20.352 "data_size": 0
00:08:20.352 }
00:08:20.352 ]
00:08:20.352 }'
00:08:20.352 14:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:20.352 14:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:20.616 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:08:20.616 14:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:20.616 14:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:20.616 [2024-10-01 14:33:12.204899] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:20.616 BaseBdev3
00:08:20.616 14:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:20.616 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:08:20.616 14:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:08:20.616 14:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:20.616 14:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:08:20.616 14:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:20.616 14:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:20.616 14:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:20.616 14:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:20.616 14:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:20.616 14:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:20.616 14:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:08:20.616 14:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:20.616 14:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:20.616 [
00:08:20.616 {
00:08:20.616 "name": "BaseBdev3",
00:08:20.616 "aliases": [
00:08:20.616 "058f346c-f67c-4d2f-a96b-5484d32af529"
00:08:20.616 ],
00:08:20.616 "product_name": "Malloc disk",
00:08:20.616 "block_size": 512,
00:08:20.616 "num_blocks": 65536,
00:08:20.616 "uuid": "058f346c-f67c-4d2f-a96b-5484d32af529",
00:08:20.616 "assigned_rate_limits": {
00:08:20.616 "rw_ios_per_sec": 0,
00:08:20.616 "rw_mbytes_per_sec": 0,
00:08:20.616 "r_mbytes_per_sec": 0,
00:08:20.616 "w_mbytes_per_sec": 0
00:08:20.616 },
00:08:20.616 "claimed": true,
00:08:20.616 "claim_type": "exclusive_write",
00:08:20.616 "zoned": false,
00:08:20.616 "supported_io_types": {
00:08:20.616 "read": true,
00:08:20.616 "write": true,
00:08:20.616 "unmap": true,
00:08:20.616 "flush": true,
00:08:20.616 "reset": true,
00:08:20.616 "nvme_admin": false,
00:08:20.616 "nvme_io": false,
00:08:20.616 "nvme_io_md": false,
00:08:20.616 "write_zeroes": true,
00:08:20.616 "zcopy": true,
00:08:20.616 "get_zone_info": false,
00:08:20.616 "zone_management": false,
00:08:20.616 "zone_append": false,
00:08:20.616 "compare": false,
00:08:20.616 "compare_and_write": false,
00:08:20.616 "abort": true,
00:08:20.616 "seek_hole": false,
00:08:20.616 "seek_data": false,
00:08:20.616 "copy": true,
00:08:20.616 "nvme_iov_md": false
00:08:20.616 },
00:08:20.616 "memory_domains": [
00:08:20.616 {
00:08:20.616 "dma_device_id": "system",
00:08:20.616 "dma_device_type": 1
00:08:20.616 },
00:08:20.616 {
00:08:20.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:20.616 "dma_device_type": 2
00:08:20.616 }
00:08:20.616 ],
00:08:20.616 "driver_specific": {}
00:08:20.616 }
00:08:20.616 ]
00:08:20.616 14:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:20.616 14:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:08:20.616 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:08:20.616 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:20.616 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:08:20.616 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:20.616 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:20.616 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:20.616 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:20.616 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:08:20.616 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:20.616 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:20.616 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:20.616 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:20.616 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:20.616 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:20.616 14:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:20.616 14:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:20.616 14:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:20.616 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:20.616 "name": "Existed_Raid",
00:08:20.616 "uuid": "d6188022-703f-4032-9a67-33a4f0c4faac",
00:08:20.616 "strip_size_kb": 64,
00:08:20.616 "state": "configuring",
00:08:20.616 "raid_level": "raid0",
00:08:20.616 "superblock": true,
00:08:20.616 "num_base_bdevs": 4,
00:08:20.616 "num_base_bdevs_discovered": 3,
00:08:20.616 "num_base_bdevs_operational": 4,
00:08:20.616 "base_bdevs_list": [
00:08:20.616 {
00:08:20.616 "name": "BaseBdev1",
00:08:20.617 "uuid": "25c180e2-6a2b-4855-b12f-179e74d7035f",
00:08:20.617 "is_configured": true,
00:08:20.617 "data_offset": 2048,
00:08:20.617 "data_size": 63488
00:08:20.617 },
00:08:20.617 {
00:08:20.617 "name": "BaseBdev2",
00:08:20.617 "uuid": "8740ff94-5b46-4427-89d7-5aad70b1ec29",
00:08:20.617 "is_configured": true,
00:08:20.617 "data_offset": 2048,
00:08:20.617 "data_size": 63488
00:08:20.617 },
00:08:20.617 {
00:08:20.617 "name": "BaseBdev3",
00:08:20.617 "uuid": "058f346c-f67c-4d2f-a96b-5484d32af529",
00:08:20.617 "is_configured": true,
00:08:20.617 "data_offset": 2048,
00:08:20.617 "data_size": 63488
00:08:20.617 },
00:08:20.617 {
00:08:20.617 "name": "BaseBdev4",
00:08:20.617 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:20.617 "is_configured": false,
00:08:20.617 "data_offset": 0,
00:08:20.617 "data_size": 0
00:08:20.617 }
00:08:20.617 ]
00:08:20.617 }'
00:08:20.617 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:20.617 14:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:20.877 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:08:20.877 14:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:20.877 14:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:21.139 [2024-10-01 14:33:12.576264] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:08:21.139 [2024-10-01 14:33:12.576511] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:08:21.139 [2024-10-01 14:33:12.576526] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:08:21.139 [2024-10-01 14:33:12.576808] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:08:21.139 [2024-10-01 14:33:12.576942] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:08:21.139 [2024-10-01 14:33:12.576954] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created
with name Existed_Raid, raid_bdev 0x617000007e80 00:08:21.140 BaseBdev4 00:08:21.140 [2024-10-01 14:33:12.577081] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:21.140 14:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.140 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:08:21.140 14:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:08:21.140 14:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:21.140 14:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:21.140 14:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:21.140 14:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:21.140 14:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:21.140 14:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.140 14:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.140 14:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.140 14:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:08:21.140 14:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.140 14:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.140 [ 00:08:21.140 { 00:08:21.140 "name": "BaseBdev4", 00:08:21.140 "aliases": [ 00:08:21.140 "248ef37a-a719-4138-a1f9-97a450a0b001" 00:08:21.140 ], 00:08:21.140 "product_name": "Malloc disk", 
00:08:21.140 "block_size": 512, 00:08:21.140 "num_blocks": 65536, 00:08:21.140 "uuid": "248ef37a-a719-4138-a1f9-97a450a0b001", 00:08:21.140 "assigned_rate_limits": { 00:08:21.140 "rw_ios_per_sec": 0, 00:08:21.140 "rw_mbytes_per_sec": 0, 00:08:21.140 "r_mbytes_per_sec": 0, 00:08:21.140 "w_mbytes_per_sec": 0 00:08:21.140 }, 00:08:21.140 "claimed": true, 00:08:21.140 "claim_type": "exclusive_write", 00:08:21.140 "zoned": false, 00:08:21.140 "supported_io_types": { 00:08:21.140 "read": true, 00:08:21.140 "write": true, 00:08:21.140 "unmap": true, 00:08:21.140 "flush": true, 00:08:21.140 "reset": true, 00:08:21.140 "nvme_admin": false, 00:08:21.140 "nvme_io": false, 00:08:21.140 "nvme_io_md": false, 00:08:21.140 "write_zeroes": true, 00:08:21.140 "zcopy": true, 00:08:21.140 "get_zone_info": false, 00:08:21.140 "zone_management": false, 00:08:21.140 "zone_append": false, 00:08:21.140 "compare": false, 00:08:21.140 "compare_and_write": false, 00:08:21.140 "abort": true, 00:08:21.140 "seek_hole": false, 00:08:21.140 "seek_data": false, 00:08:21.140 "copy": true, 00:08:21.140 "nvme_iov_md": false 00:08:21.140 }, 00:08:21.140 "memory_domains": [ 00:08:21.140 { 00:08:21.140 "dma_device_id": "system", 00:08:21.140 "dma_device_type": 1 00:08:21.140 }, 00:08:21.140 { 00:08:21.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.140 "dma_device_type": 2 00:08:21.140 } 00:08:21.140 ], 00:08:21.140 "driver_specific": {} 00:08:21.140 } 00:08:21.140 ] 00:08:21.140 14:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.140 14:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:21.140 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:21.140 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:21.140 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # 
verify_raid_bdev_state Existed_Raid online raid0 64 4 00:08:21.140 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.140 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:21.140 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:21.140 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.140 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:21.140 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.140 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.140 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.140 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.140 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.140 14:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.140 14:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.140 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.140 14:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.140 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.140 "name": "Existed_Raid", 00:08:21.140 "uuid": "d6188022-703f-4032-9a67-33a4f0c4faac", 00:08:21.140 "strip_size_kb": 64, 00:08:21.140 "state": "online", 00:08:21.140 "raid_level": "raid0", 00:08:21.140 
"superblock": true, 00:08:21.140 "num_base_bdevs": 4, 00:08:21.140 "num_base_bdevs_discovered": 4, 00:08:21.140 "num_base_bdevs_operational": 4, 00:08:21.140 "base_bdevs_list": [ 00:08:21.140 { 00:08:21.140 "name": "BaseBdev1", 00:08:21.140 "uuid": "25c180e2-6a2b-4855-b12f-179e74d7035f", 00:08:21.140 "is_configured": true, 00:08:21.140 "data_offset": 2048, 00:08:21.140 "data_size": 63488 00:08:21.140 }, 00:08:21.140 { 00:08:21.140 "name": "BaseBdev2", 00:08:21.140 "uuid": "8740ff94-5b46-4427-89d7-5aad70b1ec29", 00:08:21.140 "is_configured": true, 00:08:21.140 "data_offset": 2048, 00:08:21.140 "data_size": 63488 00:08:21.140 }, 00:08:21.140 { 00:08:21.140 "name": "BaseBdev3", 00:08:21.140 "uuid": "058f346c-f67c-4d2f-a96b-5484d32af529", 00:08:21.140 "is_configured": true, 00:08:21.140 "data_offset": 2048, 00:08:21.140 "data_size": 63488 00:08:21.140 }, 00:08:21.140 { 00:08:21.140 "name": "BaseBdev4", 00:08:21.140 "uuid": "248ef37a-a719-4138-a1f9-97a450a0b001", 00:08:21.140 "is_configured": true, 00:08:21.140 "data_offset": 2048, 00:08:21.140 "data_size": 63488 00:08:21.140 } 00:08:21.140 ] 00:08:21.140 }' 00:08:21.140 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.140 14:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.402 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:21.402 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:21.402 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:21.402 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:21.402 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:21.402 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # 
local cmp_raid_bdev cmp_base_bdev 00:08:21.402 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:21.402 14:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.402 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:21.402 14:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.402 [2024-10-01 14:33:12.916758] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:21.402 14:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.402 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:21.402 "name": "Existed_Raid", 00:08:21.402 "aliases": [ 00:08:21.402 "d6188022-703f-4032-9a67-33a4f0c4faac" 00:08:21.402 ], 00:08:21.402 "product_name": "Raid Volume", 00:08:21.402 "block_size": 512, 00:08:21.402 "num_blocks": 253952, 00:08:21.402 "uuid": "d6188022-703f-4032-9a67-33a4f0c4faac", 00:08:21.402 "assigned_rate_limits": { 00:08:21.402 "rw_ios_per_sec": 0, 00:08:21.402 "rw_mbytes_per_sec": 0, 00:08:21.402 "r_mbytes_per_sec": 0, 00:08:21.402 "w_mbytes_per_sec": 0 00:08:21.402 }, 00:08:21.402 "claimed": false, 00:08:21.402 "zoned": false, 00:08:21.402 "supported_io_types": { 00:08:21.402 "read": true, 00:08:21.402 "write": true, 00:08:21.402 "unmap": true, 00:08:21.402 "flush": true, 00:08:21.402 "reset": true, 00:08:21.402 "nvme_admin": false, 00:08:21.402 "nvme_io": false, 00:08:21.402 "nvme_io_md": false, 00:08:21.402 "write_zeroes": true, 00:08:21.402 "zcopy": false, 00:08:21.402 "get_zone_info": false, 00:08:21.402 "zone_management": false, 00:08:21.402 "zone_append": false, 00:08:21.402 "compare": false, 00:08:21.402 "compare_and_write": false, 00:08:21.402 "abort": false, 00:08:21.402 "seek_hole": false, 00:08:21.402 "seek_data": false, 
00:08:21.402 "copy": false, 00:08:21.402 "nvme_iov_md": false 00:08:21.402 }, 00:08:21.402 "memory_domains": [ 00:08:21.402 { 00:08:21.402 "dma_device_id": "system", 00:08:21.402 "dma_device_type": 1 00:08:21.402 }, 00:08:21.402 { 00:08:21.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.402 "dma_device_type": 2 00:08:21.402 }, 00:08:21.402 { 00:08:21.402 "dma_device_id": "system", 00:08:21.402 "dma_device_type": 1 00:08:21.402 }, 00:08:21.402 { 00:08:21.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.402 "dma_device_type": 2 00:08:21.402 }, 00:08:21.402 { 00:08:21.402 "dma_device_id": "system", 00:08:21.402 "dma_device_type": 1 00:08:21.402 }, 00:08:21.402 { 00:08:21.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.402 "dma_device_type": 2 00:08:21.402 }, 00:08:21.402 { 00:08:21.402 "dma_device_id": "system", 00:08:21.402 "dma_device_type": 1 00:08:21.402 }, 00:08:21.402 { 00:08:21.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.402 "dma_device_type": 2 00:08:21.402 } 00:08:21.402 ], 00:08:21.402 "driver_specific": { 00:08:21.402 "raid": { 00:08:21.402 "uuid": "d6188022-703f-4032-9a67-33a4f0c4faac", 00:08:21.402 "strip_size_kb": 64, 00:08:21.402 "state": "online", 00:08:21.402 "raid_level": "raid0", 00:08:21.402 "superblock": true, 00:08:21.402 "num_base_bdevs": 4, 00:08:21.402 "num_base_bdevs_discovered": 4, 00:08:21.402 "num_base_bdevs_operational": 4, 00:08:21.402 "base_bdevs_list": [ 00:08:21.402 { 00:08:21.402 "name": "BaseBdev1", 00:08:21.402 "uuid": "25c180e2-6a2b-4855-b12f-179e74d7035f", 00:08:21.402 "is_configured": true, 00:08:21.402 "data_offset": 2048, 00:08:21.402 "data_size": 63488 00:08:21.402 }, 00:08:21.402 { 00:08:21.402 "name": "BaseBdev2", 00:08:21.402 "uuid": "8740ff94-5b46-4427-89d7-5aad70b1ec29", 00:08:21.402 "is_configured": true, 00:08:21.402 "data_offset": 2048, 00:08:21.402 "data_size": 63488 00:08:21.402 }, 00:08:21.402 { 00:08:21.402 "name": "BaseBdev3", 00:08:21.402 "uuid": 
"058f346c-f67c-4d2f-a96b-5484d32af529", 00:08:21.402 "is_configured": true, 00:08:21.402 "data_offset": 2048, 00:08:21.402 "data_size": 63488 00:08:21.402 }, 00:08:21.402 { 00:08:21.402 "name": "BaseBdev4", 00:08:21.402 "uuid": "248ef37a-a719-4138-a1f9-97a450a0b001", 00:08:21.402 "is_configured": true, 00:08:21.402 "data_offset": 2048, 00:08:21.402 "data_size": 63488 00:08:21.402 } 00:08:21.402 ] 00:08:21.402 } 00:08:21.402 } 00:08:21.402 }' 00:08:21.402 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:21.402 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:21.402 BaseBdev2 00:08:21.402 BaseBdev3 00:08:21.402 BaseBdev4' 00:08:21.402 14:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.402 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:21.402 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.402 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:21.402 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.402 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.402 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.402 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.402 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.402 14:33:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.402 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.402 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:21.402 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.402 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.402 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.402 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.402 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.402 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.402 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.402 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:21.402 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.402 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.402 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.663 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.664 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.664 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.664 
14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.664 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:08:21.664 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.664 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.664 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.664 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.664 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.664 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.664 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:21.664 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.664 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.664 [2024-10-01 14:33:13.140497] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:21.664 [2024-10-01 14:33:13.140527] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:21.664 [2024-10-01 14:33:13.140579] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:21.664 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.664 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:21.664 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:21.664 
14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:21.664 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:21.664 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:21.664 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:08:21.664 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.664 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:21.664 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:21.664 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.664 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:21.664 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.664 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.664 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.664 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.664 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.664 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.664 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.664 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.664 14:33:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.664 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.664 "name": "Existed_Raid", 00:08:21.664 "uuid": "d6188022-703f-4032-9a67-33a4f0c4faac", 00:08:21.664 "strip_size_kb": 64, 00:08:21.664 "state": "offline", 00:08:21.664 "raid_level": "raid0", 00:08:21.664 "superblock": true, 00:08:21.664 "num_base_bdevs": 4, 00:08:21.664 "num_base_bdevs_discovered": 3, 00:08:21.664 "num_base_bdevs_operational": 3, 00:08:21.664 "base_bdevs_list": [ 00:08:21.664 { 00:08:21.664 "name": null, 00:08:21.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.664 "is_configured": false, 00:08:21.664 "data_offset": 0, 00:08:21.664 "data_size": 63488 00:08:21.664 }, 00:08:21.664 { 00:08:21.664 "name": "BaseBdev2", 00:08:21.664 "uuid": "8740ff94-5b46-4427-89d7-5aad70b1ec29", 00:08:21.664 "is_configured": true, 00:08:21.664 "data_offset": 2048, 00:08:21.664 "data_size": 63488 00:08:21.664 }, 00:08:21.664 { 00:08:21.664 "name": "BaseBdev3", 00:08:21.664 "uuid": "058f346c-f67c-4d2f-a96b-5484d32af529", 00:08:21.664 "is_configured": true, 00:08:21.664 "data_offset": 2048, 00:08:21.664 "data_size": 63488 00:08:21.664 }, 00:08:21.664 { 00:08:21.664 "name": "BaseBdev4", 00:08:21.664 "uuid": "248ef37a-a719-4138-a1f9-97a450a0b001", 00:08:21.664 "is_configured": true, 00:08:21.664 "data_offset": 2048, 00:08:21.664 "data_size": 63488 00:08:21.664 } 00:08:21.664 ] 00:08:21.664 }' 00:08:21.664 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.664 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.926 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:21.926 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:21.926 14:33:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:21.926 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.926 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.926 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.926 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.926 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:21.926 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:21.926 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:21.926 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.926 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.926 [2024-10-01 14:33:13.572584] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:22.187 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.187 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:22.187 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:22.187 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.187 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.187 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.187 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r 
'.[0]["name"]' 00:08:22.187 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.187 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:22.187 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:22.187 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:22.187 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.187 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.187 [2024-10-01 14:33:13.675276] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:22.187 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.187 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:22.187 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:22.187 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.187 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.187 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:22.187 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.187 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.187 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:22.187 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:22.187 14:33:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:08:22.187 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.187 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.187 [2024-10-01 14:33:13.778136] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:08:22.187 [2024-10-01 14:33:13.778194] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:22.187 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.187 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:22.187 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:22.187 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.187 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:22.187 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.187 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.187 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.448 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:22.448 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:22.448 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:08:22.448 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:22.448 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:08:22.448 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:22.448 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.448 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.448 BaseBdev2 00:08:22.448 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.448 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:22.448 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:22.448 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:22.448 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:22.448 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:22.448 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:22.448 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:22.448 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.448 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.448 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.448 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:22.448 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.448 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:08:22.448 [ 00:08:22.448 { 00:08:22.448 "name": "BaseBdev2", 00:08:22.448 "aliases": [ 00:08:22.448 "27838a81-acb0-48db-8a8b-7f37ec14a32a" 00:08:22.448 ], 00:08:22.448 "product_name": "Malloc disk", 00:08:22.448 "block_size": 512, 00:08:22.449 "num_blocks": 65536, 00:08:22.449 "uuid": "27838a81-acb0-48db-8a8b-7f37ec14a32a", 00:08:22.449 "assigned_rate_limits": { 00:08:22.449 "rw_ios_per_sec": 0, 00:08:22.449 "rw_mbytes_per_sec": 0, 00:08:22.449 "r_mbytes_per_sec": 0, 00:08:22.449 "w_mbytes_per_sec": 0 00:08:22.449 }, 00:08:22.449 "claimed": false, 00:08:22.449 "zoned": false, 00:08:22.449 "supported_io_types": { 00:08:22.449 "read": true, 00:08:22.449 "write": true, 00:08:22.449 "unmap": true, 00:08:22.449 "flush": true, 00:08:22.449 "reset": true, 00:08:22.449 "nvme_admin": false, 00:08:22.449 "nvme_io": false, 00:08:22.449 "nvme_io_md": false, 00:08:22.449 "write_zeroes": true, 00:08:22.449 "zcopy": true, 00:08:22.449 "get_zone_info": false, 00:08:22.449 "zone_management": false, 00:08:22.449 "zone_append": false, 00:08:22.449 "compare": false, 00:08:22.449 "compare_and_write": false, 00:08:22.449 "abort": true, 00:08:22.449 "seek_hole": false, 00:08:22.449 "seek_data": false, 00:08:22.449 "copy": true, 00:08:22.449 "nvme_iov_md": false 00:08:22.449 }, 00:08:22.449 "memory_domains": [ 00:08:22.449 { 00:08:22.449 "dma_device_id": "system", 00:08:22.449 "dma_device_type": 1 00:08:22.449 }, 00:08:22.449 { 00:08:22.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.449 "dma_device_type": 2 00:08:22.449 } 00:08:22.449 ], 00:08:22.449 "driver_specific": {} 00:08:22.449 } 00:08:22.449 ] 00:08:22.449 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.449 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:22.449 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:22.449 14:33:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:22.449 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:22.449 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.449 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.449 BaseBdev3 00:08:22.449 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.449 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:22.449 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:22.449 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:22.449 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:22.449 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:22.449 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:22.449 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:22.449 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.449 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.449 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.449 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:22.449 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.449 14:33:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.449 [ 00:08:22.449 { 00:08:22.449 "name": "BaseBdev3", 00:08:22.449 "aliases": [ 00:08:22.449 "ef746ec2-e56b-4e3f-a564-6615ffff137f" 00:08:22.449 ], 00:08:22.449 "product_name": "Malloc disk", 00:08:22.449 "block_size": 512, 00:08:22.449 "num_blocks": 65536, 00:08:22.449 "uuid": "ef746ec2-e56b-4e3f-a564-6615ffff137f", 00:08:22.449 "assigned_rate_limits": { 00:08:22.449 "rw_ios_per_sec": 0, 00:08:22.449 "rw_mbytes_per_sec": 0, 00:08:22.449 "r_mbytes_per_sec": 0, 00:08:22.449 "w_mbytes_per_sec": 0 00:08:22.449 }, 00:08:22.449 "claimed": false, 00:08:22.449 "zoned": false, 00:08:22.449 "supported_io_types": { 00:08:22.449 "read": true, 00:08:22.449 "write": true, 00:08:22.449 "unmap": true, 00:08:22.449 "flush": true, 00:08:22.449 "reset": true, 00:08:22.449 "nvme_admin": false, 00:08:22.449 "nvme_io": false, 00:08:22.449 "nvme_io_md": false, 00:08:22.449 "write_zeroes": true, 00:08:22.449 "zcopy": true, 00:08:22.449 "get_zone_info": false, 00:08:22.449 "zone_management": false, 00:08:22.449 "zone_append": false, 00:08:22.449 "compare": false, 00:08:22.449 "compare_and_write": false, 00:08:22.449 "abort": true, 00:08:22.449 "seek_hole": false, 00:08:22.449 "seek_data": false, 00:08:22.449 "copy": true, 00:08:22.449 "nvme_iov_md": false 00:08:22.449 }, 00:08:22.449 "memory_domains": [ 00:08:22.449 { 00:08:22.449 "dma_device_id": "system", 00:08:22.449 "dma_device_type": 1 00:08:22.449 }, 00:08:22.449 { 00:08:22.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.449 "dma_device_type": 2 00:08:22.449 } 00:08:22.449 ], 00:08:22.449 "driver_specific": {} 00:08:22.449 } 00:08:22.449 ] 00:08:22.449 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.449 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:22.449 14:33:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:22.449 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:22.449 14:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:08:22.449 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.449 14:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.449 BaseBdev4 00:08:22.449 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.449 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:08:22.449 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:08:22.449 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:22.449 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:22.449 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:22.449 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:22.449 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:22.449 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.449 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.449 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.449 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:08:22.449 14:33:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.449 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.449 [ 00:08:22.449 { 00:08:22.449 "name": "BaseBdev4", 00:08:22.449 "aliases": [ 00:08:22.449 "786a5d6f-56a9-483b-b57f-4c6865f4cdf5" 00:08:22.449 ], 00:08:22.449 "product_name": "Malloc disk", 00:08:22.449 "block_size": 512, 00:08:22.449 "num_blocks": 65536, 00:08:22.449 "uuid": "786a5d6f-56a9-483b-b57f-4c6865f4cdf5", 00:08:22.449 "assigned_rate_limits": { 00:08:22.449 "rw_ios_per_sec": 0, 00:08:22.449 "rw_mbytes_per_sec": 0, 00:08:22.449 "r_mbytes_per_sec": 0, 00:08:22.449 "w_mbytes_per_sec": 0 00:08:22.449 }, 00:08:22.449 "claimed": false, 00:08:22.449 "zoned": false, 00:08:22.449 "supported_io_types": { 00:08:22.449 "read": true, 00:08:22.449 "write": true, 00:08:22.449 "unmap": true, 00:08:22.449 "flush": true, 00:08:22.449 "reset": true, 00:08:22.449 "nvme_admin": false, 00:08:22.449 "nvme_io": false, 00:08:22.449 "nvme_io_md": false, 00:08:22.449 "write_zeroes": true, 00:08:22.449 "zcopy": true, 00:08:22.449 "get_zone_info": false, 00:08:22.449 "zone_management": false, 00:08:22.449 "zone_append": false, 00:08:22.449 "compare": false, 00:08:22.449 "compare_and_write": false, 00:08:22.449 "abort": true, 00:08:22.449 "seek_hole": false, 00:08:22.449 "seek_data": false, 00:08:22.449 "copy": true, 00:08:22.449 "nvme_iov_md": false 00:08:22.449 }, 00:08:22.449 "memory_domains": [ 00:08:22.449 { 00:08:22.449 "dma_device_id": "system", 00:08:22.449 "dma_device_type": 1 00:08:22.449 }, 00:08:22.449 { 00:08:22.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.449 "dma_device_type": 2 00:08:22.449 } 00:08:22.449 ], 00:08:22.449 "driver_specific": {} 00:08:22.449 } 00:08:22.449 ] 00:08:22.449 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.449 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:08:22.449 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:22.449 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:22.449 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:22.449 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.449 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.449 [2024-10-01 14:33:14.058762] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:22.449 [2024-10-01 14:33:14.058899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:22.449 [2024-10-01 14:33:14.058971] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:22.449 [2024-10-01 14:33:14.060827] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:22.449 [2024-10-01 14:33:14.060873] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:08:22.449 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.450 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:22.450 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.450 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:22.450 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:22.450 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:08:22.450 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:22.450 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.450 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.450 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.450 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.450 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.450 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.450 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.450 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.450 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.450 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.450 "name": "Existed_Raid", 00:08:22.450 "uuid": "7d6eee76-b9a2-4f74-b94d-72735363ad50", 00:08:22.450 "strip_size_kb": 64, 00:08:22.450 "state": "configuring", 00:08:22.450 "raid_level": "raid0", 00:08:22.450 "superblock": true, 00:08:22.450 "num_base_bdevs": 4, 00:08:22.450 "num_base_bdevs_discovered": 3, 00:08:22.450 "num_base_bdevs_operational": 4, 00:08:22.450 "base_bdevs_list": [ 00:08:22.450 { 00:08:22.450 "name": "BaseBdev1", 00:08:22.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.450 "is_configured": false, 00:08:22.450 "data_offset": 0, 00:08:22.450 "data_size": 0 00:08:22.450 }, 00:08:22.450 { 00:08:22.450 "name": "BaseBdev2", 00:08:22.450 "uuid": 
"27838a81-acb0-48db-8a8b-7f37ec14a32a", 00:08:22.450 "is_configured": true, 00:08:22.450 "data_offset": 2048, 00:08:22.450 "data_size": 63488 00:08:22.450 }, 00:08:22.450 { 00:08:22.450 "name": "BaseBdev3", 00:08:22.450 "uuid": "ef746ec2-e56b-4e3f-a564-6615ffff137f", 00:08:22.450 "is_configured": true, 00:08:22.450 "data_offset": 2048, 00:08:22.450 "data_size": 63488 00:08:22.450 }, 00:08:22.450 { 00:08:22.450 "name": "BaseBdev4", 00:08:22.450 "uuid": "786a5d6f-56a9-483b-b57f-4c6865f4cdf5", 00:08:22.450 "is_configured": true, 00:08:22.450 "data_offset": 2048, 00:08:22.450 "data_size": 63488 00:08:22.450 } 00:08:22.450 ] 00:08:22.450 }' 00:08:22.450 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.450 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.713 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:22.713 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.713 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.974 [2024-10-01 14:33:14.394828] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:22.974 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.974 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:22.974 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.975 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:22.975 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:22.975 14:33:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.975 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:22.975 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.975 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.975 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.975 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.975 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.975 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.975 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.975 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.975 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.975 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.975 "name": "Existed_Raid", 00:08:22.975 "uuid": "7d6eee76-b9a2-4f74-b94d-72735363ad50", 00:08:22.975 "strip_size_kb": 64, 00:08:22.975 "state": "configuring", 00:08:22.975 "raid_level": "raid0", 00:08:22.975 "superblock": true, 00:08:22.975 "num_base_bdevs": 4, 00:08:22.975 "num_base_bdevs_discovered": 2, 00:08:22.975 "num_base_bdevs_operational": 4, 00:08:22.975 "base_bdevs_list": [ 00:08:22.975 { 00:08:22.975 "name": "BaseBdev1", 00:08:22.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.975 "is_configured": false, 00:08:22.975 "data_offset": 0, 00:08:22.975 "data_size": 0 00:08:22.975 }, 00:08:22.975 { 00:08:22.975 "name": null, 00:08:22.975 "uuid": 
"27838a81-acb0-48db-8a8b-7f37ec14a32a", 00:08:22.975 "is_configured": false, 00:08:22.975 "data_offset": 0, 00:08:22.975 "data_size": 63488 00:08:22.975 }, 00:08:22.975 { 00:08:22.975 "name": "BaseBdev3", 00:08:22.975 "uuid": "ef746ec2-e56b-4e3f-a564-6615ffff137f", 00:08:22.975 "is_configured": true, 00:08:22.975 "data_offset": 2048, 00:08:22.975 "data_size": 63488 00:08:22.975 }, 00:08:22.975 { 00:08:22.975 "name": "BaseBdev4", 00:08:22.975 "uuid": "786a5d6f-56a9-483b-b57f-4c6865f4cdf5", 00:08:22.975 "is_configured": true, 00:08:22.975 "data_offset": 2048, 00:08:22.975 "data_size": 63488 00:08:22.975 } 00:08:22.975 ] 00:08:22.975 }' 00:08:22.975 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.975 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.236 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.237 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:23.237 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.237 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.237 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.237 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:23.237 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:23.237 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.237 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.237 [2024-10-01 14:33:14.777893] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:23.237 BaseBdev1 00:08:23.237 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.237 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:23.237 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:23.237 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:23.237 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:23.237 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:23.237 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:23.237 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:23.237 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.237 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.237 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.237 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:23.237 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.237 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.237 [ 00:08:23.237 { 00:08:23.237 "name": "BaseBdev1", 00:08:23.237 "aliases": [ 00:08:23.237 "ecf0cefc-73d8-41cf-b64d-32ecd7690171" 00:08:23.237 ], 00:08:23.237 "product_name": "Malloc disk", 00:08:23.237 "block_size": 512, 00:08:23.237 "num_blocks": 65536, 00:08:23.237 "uuid": "ecf0cefc-73d8-41cf-b64d-32ecd7690171", 
00:08:23.237 "assigned_rate_limits": { 00:08:23.237 "rw_ios_per_sec": 0, 00:08:23.237 "rw_mbytes_per_sec": 0, 00:08:23.237 "r_mbytes_per_sec": 0, 00:08:23.237 "w_mbytes_per_sec": 0 00:08:23.237 }, 00:08:23.237 "claimed": true, 00:08:23.237 "claim_type": "exclusive_write", 00:08:23.237 "zoned": false, 00:08:23.237 "supported_io_types": { 00:08:23.237 "read": true, 00:08:23.237 "write": true, 00:08:23.237 "unmap": true, 00:08:23.237 "flush": true, 00:08:23.237 "reset": true, 00:08:23.237 "nvme_admin": false, 00:08:23.237 "nvme_io": false, 00:08:23.237 "nvme_io_md": false, 00:08:23.237 "write_zeroes": true, 00:08:23.237 "zcopy": true, 00:08:23.237 "get_zone_info": false, 00:08:23.237 "zone_management": false, 00:08:23.237 "zone_append": false, 00:08:23.237 "compare": false, 00:08:23.237 "compare_and_write": false, 00:08:23.237 "abort": true, 00:08:23.237 "seek_hole": false, 00:08:23.237 "seek_data": false, 00:08:23.237 "copy": true, 00:08:23.237 "nvme_iov_md": false 00:08:23.237 }, 00:08:23.237 "memory_domains": [ 00:08:23.237 { 00:08:23.237 "dma_device_id": "system", 00:08:23.237 "dma_device_type": 1 00:08:23.237 }, 00:08:23.237 { 00:08:23.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.237 "dma_device_type": 2 00:08:23.237 } 00:08:23.237 ], 00:08:23.237 "driver_specific": {} 00:08:23.237 } 00:08:23.237 ] 00:08:23.237 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.237 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:23.237 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:23.237 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.237 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.237 14:33:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:23.237 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.237 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:23.237 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.237 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.237 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.237 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.237 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.237 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.237 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.237 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.237 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.237 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.237 "name": "Existed_Raid", 00:08:23.237 "uuid": "7d6eee76-b9a2-4f74-b94d-72735363ad50", 00:08:23.237 "strip_size_kb": 64, 00:08:23.237 "state": "configuring", 00:08:23.237 "raid_level": "raid0", 00:08:23.237 "superblock": true, 00:08:23.237 "num_base_bdevs": 4, 00:08:23.237 "num_base_bdevs_discovered": 3, 00:08:23.237 "num_base_bdevs_operational": 4, 00:08:23.237 "base_bdevs_list": [ 00:08:23.237 { 00:08:23.237 "name": "BaseBdev1", 00:08:23.237 "uuid": "ecf0cefc-73d8-41cf-b64d-32ecd7690171", 00:08:23.237 
"is_configured": true, 00:08:23.237 "data_offset": 2048, 00:08:23.237 "data_size": 63488 00:08:23.237 }, 00:08:23.237 { 00:08:23.237 "name": null, 00:08:23.237 "uuid": "27838a81-acb0-48db-8a8b-7f37ec14a32a", 00:08:23.237 "is_configured": false, 00:08:23.237 "data_offset": 0, 00:08:23.237 "data_size": 63488 00:08:23.237 }, 00:08:23.237 { 00:08:23.237 "name": "BaseBdev3", 00:08:23.237 "uuid": "ef746ec2-e56b-4e3f-a564-6615ffff137f", 00:08:23.237 "is_configured": true, 00:08:23.237 "data_offset": 2048, 00:08:23.237 "data_size": 63488 00:08:23.237 }, 00:08:23.237 { 00:08:23.237 "name": "BaseBdev4", 00:08:23.237 "uuid": "786a5d6f-56a9-483b-b57f-4c6865f4cdf5", 00:08:23.237 "is_configured": true, 00:08:23.237 "data_offset": 2048, 00:08:23.237 "data_size": 63488 00:08:23.237 } 00:08:23.237 ] 00:08:23.237 }' 00:08:23.237 14:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.237 14:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.500 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.500 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:23.500 14:33:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.500 14:33:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.500 14:33:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.500 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:23.500 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:23.500 14:33:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.500 14:33:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.500 [2024-10-01 14:33:15.158053] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:23.500 14:33:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.500 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:23.500 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.500 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.500 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:23.500 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.500 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:23.500 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.500 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.500 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.500 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.500 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.500 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.500 14:33:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.500 14:33:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.500 14:33:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.761 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.761 "name": "Existed_Raid", 00:08:23.761 "uuid": "7d6eee76-b9a2-4f74-b94d-72735363ad50", 00:08:23.761 "strip_size_kb": 64, 00:08:23.761 "state": "configuring", 00:08:23.761 "raid_level": "raid0", 00:08:23.761 "superblock": true, 00:08:23.761 "num_base_bdevs": 4, 00:08:23.761 "num_base_bdevs_discovered": 2, 00:08:23.761 "num_base_bdevs_operational": 4, 00:08:23.761 "base_bdevs_list": [ 00:08:23.761 { 00:08:23.761 "name": "BaseBdev1", 00:08:23.761 "uuid": "ecf0cefc-73d8-41cf-b64d-32ecd7690171", 00:08:23.761 "is_configured": true, 00:08:23.761 "data_offset": 2048, 00:08:23.761 "data_size": 63488 00:08:23.761 }, 00:08:23.761 { 00:08:23.761 "name": null, 00:08:23.761 "uuid": "27838a81-acb0-48db-8a8b-7f37ec14a32a", 00:08:23.761 "is_configured": false, 00:08:23.761 "data_offset": 0, 00:08:23.761 "data_size": 63488 00:08:23.761 }, 00:08:23.761 { 00:08:23.761 "name": null, 00:08:23.761 "uuid": "ef746ec2-e56b-4e3f-a564-6615ffff137f", 00:08:23.761 "is_configured": false, 00:08:23.761 "data_offset": 0, 00:08:23.761 "data_size": 63488 00:08:23.761 }, 00:08:23.761 { 00:08:23.761 "name": "BaseBdev4", 00:08:23.761 "uuid": "786a5d6f-56a9-483b-b57f-4c6865f4cdf5", 00:08:23.761 "is_configured": true, 00:08:23.761 "data_offset": 2048, 00:08:23.761 "data_size": 63488 00:08:23.761 } 00:08:23.761 ] 00:08:23.761 }' 00:08:23.761 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.761 14:33:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.022 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:24.022 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.022 
14:33:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.022 14:33:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.022 14:33:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.022 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:24.022 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:24.022 14:33:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.022 14:33:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.022 [2024-10-01 14:33:15.506162] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:24.022 14:33:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.022 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:24.022 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.022 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:24.022 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:24.022 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.022 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:24.022 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.022 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:08:24.022 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.022 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.022 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.022 14:33:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.022 14:33:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.022 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.022 14:33:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.022 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.022 "name": "Existed_Raid", 00:08:24.022 "uuid": "7d6eee76-b9a2-4f74-b94d-72735363ad50", 00:08:24.022 "strip_size_kb": 64, 00:08:24.022 "state": "configuring", 00:08:24.022 "raid_level": "raid0", 00:08:24.022 "superblock": true, 00:08:24.022 "num_base_bdevs": 4, 00:08:24.022 "num_base_bdevs_discovered": 3, 00:08:24.022 "num_base_bdevs_operational": 4, 00:08:24.022 "base_bdevs_list": [ 00:08:24.022 { 00:08:24.022 "name": "BaseBdev1", 00:08:24.022 "uuid": "ecf0cefc-73d8-41cf-b64d-32ecd7690171", 00:08:24.022 "is_configured": true, 00:08:24.022 "data_offset": 2048, 00:08:24.022 "data_size": 63488 00:08:24.022 }, 00:08:24.022 { 00:08:24.022 "name": null, 00:08:24.022 "uuid": "27838a81-acb0-48db-8a8b-7f37ec14a32a", 00:08:24.022 "is_configured": false, 00:08:24.022 "data_offset": 0, 00:08:24.022 "data_size": 63488 00:08:24.022 }, 00:08:24.022 { 00:08:24.022 "name": "BaseBdev3", 00:08:24.022 "uuid": "ef746ec2-e56b-4e3f-a564-6615ffff137f", 00:08:24.022 "is_configured": true, 00:08:24.022 "data_offset": 2048, 00:08:24.022 "data_size": 63488 00:08:24.022 }, 
00:08:24.022 { 00:08:24.022 "name": "BaseBdev4", 00:08:24.022 "uuid": "786a5d6f-56a9-483b-b57f-4c6865f4cdf5", 00:08:24.022 "is_configured": true, 00:08:24.022 "data_offset": 2048, 00:08:24.022 "data_size": 63488 00:08:24.022 } 00:08:24.022 ] 00:08:24.022 }' 00:08:24.022 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.022 14:33:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.283 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.283 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:24.283 14:33:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.283 14:33:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.283 14:33:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.283 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:24.283 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:24.283 14:33:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.283 14:33:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.283 [2024-10-01 14:33:15.858264] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:24.283 14:33:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.283 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:24.283 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:08:24.283 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:24.283 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:24.283 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.283 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:24.283 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.283 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.283 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.283 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.283 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.283 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.283 14:33:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.283 14:33:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.283 14:33:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.283 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.283 "name": "Existed_Raid", 00:08:24.283 "uuid": "7d6eee76-b9a2-4f74-b94d-72735363ad50", 00:08:24.283 "strip_size_kb": 64, 00:08:24.283 "state": "configuring", 00:08:24.283 "raid_level": "raid0", 00:08:24.283 "superblock": true, 00:08:24.283 "num_base_bdevs": 4, 00:08:24.283 "num_base_bdevs_discovered": 2, 00:08:24.283 "num_base_bdevs_operational": 4, 00:08:24.283 
"base_bdevs_list": [ 00:08:24.283 { 00:08:24.283 "name": null, 00:08:24.283 "uuid": "ecf0cefc-73d8-41cf-b64d-32ecd7690171", 00:08:24.283 "is_configured": false, 00:08:24.283 "data_offset": 0, 00:08:24.283 "data_size": 63488 00:08:24.283 }, 00:08:24.283 { 00:08:24.283 "name": null, 00:08:24.283 "uuid": "27838a81-acb0-48db-8a8b-7f37ec14a32a", 00:08:24.283 "is_configured": false, 00:08:24.283 "data_offset": 0, 00:08:24.283 "data_size": 63488 00:08:24.283 }, 00:08:24.283 { 00:08:24.283 "name": "BaseBdev3", 00:08:24.283 "uuid": "ef746ec2-e56b-4e3f-a564-6615ffff137f", 00:08:24.283 "is_configured": true, 00:08:24.283 "data_offset": 2048, 00:08:24.283 "data_size": 63488 00:08:24.283 }, 00:08:24.283 { 00:08:24.283 "name": "BaseBdev4", 00:08:24.283 "uuid": "786a5d6f-56a9-483b-b57f-4c6865f4cdf5", 00:08:24.283 "is_configured": true, 00:08:24.283 "data_offset": 2048, 00:08:24.283 "data_size": 63488 00:08:24.283 } 00:08:24.283 ] 00:08:24.283 }' 00:08:24.283 14:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.283 14:33:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.856 14:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.856 14:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.856 14:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.856 14:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:24.856 14:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.856 14:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:24.856 14:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 
00:08:24.856 14:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.856 14:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.856 [2024-10-01 14:33:16.293762] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:24.856 14:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.856 14:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:24.856 14:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.856 14:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:24.856 14:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:24.856 14:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.856 14:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:24.856 14:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.856 14:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.856 14:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.856 14:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.856 14:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.856 14:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.856 14:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:24.856 14:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.856 14:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.856 14:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.856 "name": "Existed_Raid", 00:08:24.856 "uuid": "7d6eee76-b9a2-4f74-b94d-72735363ad50", 00:08:24.856 "strip_size_kb": 64, 00:08:24.856 "state": "configuring", 00:08:24.856 "raid_level": "raid0", 00:08:24.856 "superblock": true, 00:08:24.856 "num_base_bdevs": 4, 00:08:24.856 "num_base_bdevs_discovered": 3, 00:08:24.856 "num_base_bdevs_operational": 4, 00:08:24.856 "base_bdevs_list": [ 00:08:24.856 { 00:08:24.856 "name": null, 00:08:24.856 "uuid": "ecf0cefc-73d8-41cf-b64d-32ecd7690171", 00:08:24.856 "is_configured": false, 00:08:24.856 "data_offset": 0, 00:08:24.856 "data_size": 63488 00:08:24.856 }, 00:08:24.856 { 00:08:24.856 "name": "BaseBdev2", 00:08:24.856 "uuid": "27838a81-acb0-48db-8a8b-7f37ec14a32a", 00:08:24.856 "is_configured": true, 00:08:24.856 "data_offset": 2048, 00:08:24.856 "data_size": 63488 00:08:24.856 }, 00:08:24.856 { 00:08:24.856 "name": "BaseBdev3", 00:08:24.856 "uuid": "ef746ec2-e56b-4e3f-a564-6615ffff137f", 00:08:24.856 "is_configured": true, 00:08:24.856 "data_offset": 2048, 00:08:24.856 "data_size": 63488 00:08:24.856 }, 00:08:24.856 { 00:08:24.856 "name": "BaseBdev4", 00:08:24.856 "uuid": "786a5d6f-56a9-483b-b57f-4c6865f4cdf5", 00:08:24.856 "is_configured": true, 00:08:24.856 "data_offset": 2048, 00:08:24.856 "data_size": 63488 00:08:24.856 } 00:08:24.856 ] 00:08:24.856 }' 00:08:24.856 14:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.856 14:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.117 14:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:25.117 14:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:25.117 14:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.117 14:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.117 14:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.117 14:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:25.117 14:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:25.117 14:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.117 14:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.117 14:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.117 14:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.118 14:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ecf0cefc-73d8-41cf-b64d-32ecd7690171 00:08:25.118 14:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.118 14:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.118 [2024-10-01 14:33:16.696246] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:25.118 [2024-10-01 14:33:16.696603] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:25.118 [2024-10-01 14:33:16.696621] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:08:25.118 [2024-10-01 14:33:16.696887] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:25.118 [2024-10-01 14:33:16.697007] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:25.118 [2024-10-01 14:33:16.697018] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:25.118 [2024-10-01 14:33:16.697129] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:25.118 NewBaseBdev 00:08:25.118 14:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.118 14:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:25.118 14:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:25.118 14:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:25.118 14:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:25.118 14:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:25.118 14:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:25.118 14:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:25.118 14:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.118 14:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.118 14:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.118 14:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:25.118 14:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:25.118 14:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.118 [ 00:08:25.118 { 00:08:25.118 "name": "NewBaseBdev", 00:08:25.118 "aliases": [ 00:08:25.118 "ecf0cefc-73d8-41cf-b64d-32ecd7690171" 00:08:25.118 ], 00:08:25.118 "product_name": "Malloc disk", 00:08:25.118 "block_size": 512, 00:08:25.118 "num_blocks": 65536, 00:08:25.118 "uuid": "ecf0cefc-73d8-41cf-b64d-32ecd7690171", 00:08:25.118 "assigned_rate_limits": { 00:08:25.118 "rw_ios_per_sec": 0, 00:08:25.118 "rw_mbytes_per_sec": 0, 00:08:25.118 "r_mbytes_per_sec": 0, 00:08:25.118 "w_mbytes_per_sec": 0 00:08:25.118 }, 00:08:25.118 "claimed": true, 00:08:25.118 "claim_type": "exclusive_write", 00:08:25.118 "zoned": false, 00:08:25.118 "supported_io_types": { 00:08:25.118 "read": true, 00:08:25.118 "write": true, 00:08:25.118 "unmap": true, 00:08:25.118 "flush": true, 00:08:25.118 "reset": true, 00:08:25.118 "nvme_admin": false, 00:08:25.118 "nvme_io": false, 00:08:25.118 "nvme_io_md": false, 00:08:25.118 "write_zeroes": true, 00:08:25.118 "zcopy": true, 00:08:25.118 "get_zone_info": false, 00:08:25.118 "zone_management": false, 00:08:25.118 "zone_append": false, 00:08:25.118 "compare": false, 00:08:25.118 "compare_and_write": false, 00:08:25.118 "abort": true, 00:08:25.118 "seek_hole": false, 00:08:25.118 "seek_data": false, 00:08:25.118 "copy": true, 00:08:25.118 "nvme_iov_md": false 00:08:25.118 }, 00:08:25.118 "memory_domains": [ 00:08:25.118 { 00:08:25.118 "dma_device_id": "system", 00:08:25.118 "dma_device_type": 1 00:08:25.118 }, 00:08:25.118 { 00:08:25.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.118 "dma_device_type": 2 00:08:25.118 } 00:08:25.118 ], 00:08:25.118 "driver_specific": {} 00:08:25.118 } 00:08:25.118 ] 00:08:25.118 14:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.118 14:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:08:25.118 14:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:08:25.118 14:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.118 14:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:25.118 14:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:25.118 14:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.118 14:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:25.118 14:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.118 14:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.118 14:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.118 14:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.118 14:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.118 14:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.118 14:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.118 14:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.118 14:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.118 14:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.118 "name": "Existed_Raid", 00:08:25.118 "uuid": "7d6eee76-b9a2-4f74-b94d-72735363ad50", 00:08:25.118 "strip_size_kb": 
64, 00:08:25.118 "state": "online", 00:08:25.118 "raid_level": "raid0", 00:08:25.118 "superblock": true, 00:08:25.118 "num_base_bdevs": 4, 00:08:25.118 "num_base_bdevs_discovered": 4, 00:08:25.118 "num_base_bdevs_operational": 4, 00:08:25.118 "base_bdevs_list": [ 00:08:25.118 { 00:08:25.118 "name": "NewBaseBdev", 00:08:25.118 "uuid": "ecf0cefc-73d8-41cf-b64d-32ecd7690171", 00:08:25.118 "is_configured": true, 00:08:25.118 "data_offset": 2048, 00:08:25.118 "data_size": 63488 00:08:25.118 }, 00:08:25.118 { 00:08:25.118 "name": "BaseBdev2", 00:08:25.118 "uuid": "27838a81-acb0-48db-8a8b-7f37ec14a32a", 00:08:25.118 "is_configured": true, 00:08:25.118 "data_offset": 2048, 00:08:25.118 "data_size": 63488 00:08:25.118 }, 00:08:25.118 { 00:08:25.118 "name": "BaseBdev3", 00:08:25.118 "uuid": "ef746ec2-e56b-4e3f-a564-6615ffff137f", 00:08:25.118 "is_configured": true, 00:08:25.118 "data_offset": 2048, 00:08:25.118 "data_size": 63488 00:08:25.118 }, 00:08:25.118 { 00:08:25.118 "name": "BaseBdev4", 00:08:25.118 "uuid": "786a5d6f-56a9-483b-b57f-4c6865f4cdf5", 00:08:25.118 "is_configured": true, 00:08:25.118 "data_offset": 2048, 00:08:25.118 "data_size": 63488 00:08:25.118 } 00:08:25.118 ] 00:08:25.118 }' 00:08:25.118 14:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.118 14:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.380 14:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:25.380 14:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:25.380 14:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:25.380 14:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:25.380 14:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 
00:08:25.380 14:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:25.380 14:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:25.380 14:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.380 14:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.380 14:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:25.640 [2024-10-01 14:33:17.064766] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:25.640 14:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.640 14:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:25.640 "name": "Existed_Raid", 00:08:25.640 "aliases": [ 00:08:25.640 "7d6eee76-b9a2-4f74-b94d-72735363ad50" 00:08:25.640 ], 00:08:25.640 "product_name": "Raid Volume", 00:08:25.640 "block_size": 512, 00:08:25.640 "num_blocks": 253952, 00:08:25.640 "uuid": "7d6eee76-b9a2-4f74-b94d-72735363ad50", 00:08:25.640 "assigned_rate_limits": { 00:08:25.640 "rw_ios_per_sec": 0, 00:08:25.640 "rw_mbytes_per_sec": 0, 00:08:25.640 "r_mbytes_per_sec": 0, 00:08:25.640 "w_mbytes_per_sec": 0 00:08:25.640 }, 00:08:25.640 "claimed": false, 00:08:25.640 "zoned": false, 00:08:25.640 "supported_io_types": { 00:08:25.640 "read": true, 00:08:25.640 "write": true, 00:08:25.640 "unmap": true, 00:08:25.640 "flush": true, 00:08:25.640 "reset": true, 00:08:25.640 "nvme_admin": false, 00:08:25.640 "nvme_io": false, 00:08:25.640 "nvme_io_md": false, 00:08:25.640 "write_zeroes": true, 00:08:25.640 "zcopy": false, 00:08:25.640 "get_zone_info": false, 00:08:25.640 "zone_management": false, 00:08:25.641 "zone_append": false, 00:08:25.641 "compare": false, 00:08:25.641 "compare_and_write": false, 
00:08:25.641 "abort": false, 00:08:25.641 "seek_hole": false, 00:08:25.641 "seek_data": false, 00:08:25.641 "copy": false, 00:08:25.641 "nvme_iov_md": false 00:08:25.641 }, 00:08:25.641 "memory_domains": [ 00:08:25.641 { 00:08:25.641 "dma_device_id": "system", 00:08:25.641 "dma_device_type": 1 00:08:25.641 }, 00:08:25.641 { 00:08:25.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.641 "dma_device_type": 2 00:08:25.641 }, 00:08:25.641 { 00:08:25.641 "dma_device_id": "system", 00:08:25.641 "dma_device_type": 1 00:08:25.641 }, 00:08:25.641 { 00:08:25.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.641 "dma_device_type": 2 00:08:25.641 }, 00:08:25.641 { 00:08:25.641 "dma_device_id": "system", 00:08:25.641 "dma_device_type": 1 00:08:25.641 }, 00:08:25.641 { 00:08:25.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.641 "dma_device_type": 2 00:08:25.641 }, 00:08:25.641 { 00:08:25.641 "dma_device_id": "system", 00:08:25.641 "dma_device_type": 1 00:08:25.641 }, 00:08:25.641 { 00:08:25.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.641 "dma_device_type": 2 00:08:25.641 } 00:08:25.641 ], 00:08:25.641 "driver_specific": { 00:08:25.641 "raid": { 00:08:25.641 "uuid": "7d6eee76-b9a2-4f74-b94d-72735363ad50", 00:08:25.641 "strip_size_kb": 64, 00:08:25.641 "state": "online", 00:08:25.641 "raid_level": "raid0", 00:08:25.641 "superblock": true, 00:08:25.641 "num_base_bdevs": 4, 00:08:25.641 "num_base_bdevs_discovered": 4, 00:08:25.641 "num_base_bdevs_operational": 4, 00:08:25.641 "base_bdevs_list": [ 00:08:25.641 { 00:08:25.641 "name": "NewBaseBdev", 00:08:25.641 "uuid": "ecf0cefc-73d8-41cf-b64d-32ecd7690171", 00:08:25.641 "is_configured": true, 00:08:25.641 "data_offset": 2048, 00:08:25.641 "data_size": 63488 00:08:25.641 }, 00:08:25.641 { 00:08:25.641 "name": "BaseBdev2", 00:08:25.641 "uuid": "27838a81-acb0-48db-8a8b-7f37ec14a32a", 00:08:25.641 "is_configured": true, 00:08:25.641 "data_offset": 2048, 00:08:25.641 "data_size": 63488 00:08:25.641 }, 
00:08:25.641 { 00:08:25.641 "name": "BaseBdev3", 00:08:25.641 "uuid": "ef746ec2-e56b-4e3f-a564-6615ffff137f", 00:08:25.641 "is_configured": true, 00:08:25.641 "data_offset": 2048, 00:08:25.641 "data_size": 63488 00:08:25.641 }, 00:08:25.641 { 00:08:25.641 "name": "BaseBdev4", 00:08:25.641 "uuid": "786a5d6f-56a9-483b-b57f-4c6865f4cdf5", 00:08:25.641 "is_configured": true, 00:08:25.641 "data_offset": 2048, 00:08:25.641 "data_size": 63488 00:08:25.641 } 00:08:25.641 ] 00:08:25.641 } 00:08:25.641 } 00:08:25.641 }' 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:25.641 BaseBdev2 00:08:25.641 BaseBdev3 00:08:25.641 BaseBdev4' 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:25.641 14:33:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.641 [2024-10-01 14:33:17.292448] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:25.641 [2024-10-01 14:33:17.292475] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:25.641 [2024-10-01 14:33:17.292545] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:25.641 [2024-10-01 14:33:17.292609] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:25.641 [2024-10-01 14:33:17.292619] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68508 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 68508 ']' 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 68508 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68508 00:08:25.641 killing process with pid 68508 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68508' 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 68508 00:08:25.641 [2024-10-01 14:33:17.319036] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:25.641 14:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 68508 00:08:25.903 [2024-10-01 14:33:17.566550] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:26.847 14:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:26.847 00:08:26.847 real 0m8.586s 00:08:26.847 user 0m13.570s 00:08:26.847 sys 0m1.354s 00:08:26.847 14:33:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:26.847 14:33:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.847 ************************************ 00:08:26.847 END TEST raid_state_function_test_sb 00:08:26.847 ************************************ 00:08:26.847 14:33:18 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:08:26.847 14:33:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:26.847 14:33:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:26.847 14:33:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:26.847 ************************************ 00:08:26.847 START TEST raid_superblock_test 00:08:26.847 ************************************ 00:08:26.847 14:33:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:08:26.847 14:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:26.847 14:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:08:26.847 14:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:26.847 14:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:26.847 14:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:26.847 14:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:26.847 14:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:26.847 14:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:26.847 14:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:26.847 14:33:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@399 -- # local strip_size 00:08:26.847 14:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:26.847 14:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:26.847 14:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:26.847 14:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:26.847 14:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:26.847 14:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:26.847 14:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=69151 00:08:26.847 14:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 69151 00:08:26.847 14:33:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 69151 ']' 00:08:26.847 14:33:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.847 14:33:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:26.848 14:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:26.848 14:33:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.848 14:33:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:26.848 14:33:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.848 [2024-10-01 14:33:18.518211] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:08:26.848 [2024-10-01 14:33:18.519135] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69151 ] 00:08:27.114 [2024-10-01 14:33:18.672530] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.378 [2024-10-01 14:33:18.861623] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.378 [2024-10-01 14:33:18.997645] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:27.378 [2024-10-01 14:33:18.997673] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:27.949 14:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:27.949 14:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:27.949 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:27.949 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:27.949 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:27.949 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:27.949 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:27.949 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:27.950 
14:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.950 malloc1 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.950 [2024-10-01 14:33:19.419282] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:27.950 [2024-10-01 14:33:19.419340] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:27.950 [2024-10-01 14:33:19.419360] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:27.950 [2024-10-01 14:33:19.419371] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:27.950 [2024-10-01 14:33:19.421513] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:27.950 [2024-10-01 14:33:19.421651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:27.950 pt1 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.950 malloc2 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.950 [2024-10-01 14:33:19.478183] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:27.950 [2024-10-01 14:33:19.478240] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:27.950 [2024-10-01 14:33:19.478263] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:27.950 [2024-10-01 14:33:19.478272] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:27.950 [2024-10-01 14:33:19.480393] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:27.950 [2024-10-01 14:33:19.480531] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:27.950 
pt2 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.950 malloc3 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.950 [2024-10-01 14:33:19.514155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:27.950 [2024-10-01 14:33:19.514198] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:27.950 [2024-10-01 14:33:19.514218] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:27.950 [2024-10-01 14:33:19.514226] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:27.950 [2024-10-01 14:33:19.516309] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:27.950 [2024-10-01 14:33:19.516340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:27.950 pt3 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.950 malloc4 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.950 [2024-10-01 14:33:19.558044] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:08:27.950 [2024-10-01 14:33:19.558186] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:27.950 [2024-10-01 14:33:19.558210] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:27.950 [2024-10-01 14:33:19.558219] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:27.950 [2024-10-01 14:33:19.560316] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:27.950 [2024-10-01 14:33:19.560349] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:08:27.950 pt4 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.950 [2024-10-01 14:33:19.570100] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:27.950 [2024-10-01 
14:33:19.571907] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:27.950 [2024-10-01 14:33:19.571969] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:27.950 [2024-10-01 14:33:19.572031] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:08:27.950 [2024-10-01 14:33:19.572209] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:27.950 [2024-10-01 14:33:19.572225] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:08:27.950 [2024-10-01 14:33:19.572477] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:27.950 [2024-10-01 14:33:19.572615] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:27.950 [2024-10-01 14:33:19.572627] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:27.950 [2024-10-01 14:33:19.572778] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.950 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:27.951 14:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.951 14:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.951 14:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.951 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.951 "name": "raid_bdev1", 00:08:27.951 "uuid": "09a83a47-f1ac-4895-b829-d33df01fad1e", 00:08:27.951 "strip_size_kb": 64, 00:08:27.951 "state": "online", 00:08:27.951 "raid_level": "raid0", 00:08:27.951 "superblock": true, 00:08:27.951 "num_base_bdevs": 4, 00:08:27.951 "num_base_bdevs_discovered": 4, 00:08:27.951 "num_base_bdevs_operational": 4, 00:08:27.951 "base_bdevs_list": [ 00:08:27.951 { 00:08:27.951 "name": "pt1", 00:08:27.951 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:27.951 "is_configured": true, 00:08:27.951 "data_offset": 2048, 00:08:27.951 "data_size": 63488 00:08:27.951 }, 00:08:27.951 { 00:08:27.951 "name": "pt2", 00:08:27.951 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:27.951 "is_configured": true, 00:08:27.951 "data_offset": 2048, 00:08:27.951 "data_size": 63488 00:08:27.951 }, 00:08:27.951 { 00:08:27.951 "name": "pt3", 00:08:27.951 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:27.951 "is_configured": true, 00:08:27.951 "data_offset": 2048, 00:08:27.951 
"data_size": 63488 00:08:27.951 }, 00:08:27.951 { 00:08:27.951 "name": "pt4", 00:08:27.951 "uuid": "00000000-0000-0000-0000-000000000004", 00:08:27.951 "is_configured": true, 00:08:27.951 "data_offset": 2048, 00:08:27.951 "data_size": 63488 00:08:27.951 } 00:08:27.951 ] 00:08:27.951 }' 00:08:27.951 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.951 14:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.523 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:28.523 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:28.523 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:28.523 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:28.523 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:28.523 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:28.523 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:28.523 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:28.523 14:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.523 14:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.523 [2024-10-01 14:33:19.906513] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:28.523 14:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.523 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:28.523 "name": "raid_bdev1", 00:08:28.523 "aliases": [ 00:08:28.523 "09a83a47-f1ac-4895-b829-d33df01fad1e" 
00:08:28.523 ], 00:08:28.523 "product_name": "Raid Volume", 00:08:28.523 "block_size": 512, 00:08:28.523 "num_blocks": 253952, 00:08:28.523 "uuid": "09a83a47-f1ac-4895-b829-d33df01fad1e", 00:08:28.523 "assigned_rate_limits": { 00:08:28.523 "rw_ios_per_sec": 0, 00:08:28.523 "rw_mbytes_per_sec": 0, 00:08:28.523 "r_mbytes_per_sec": 0, 00:08:28.523 "w_mbytes_per_sec": 0 00:08:28.523 }, 00:08:28.523 "claimed": false, 00:08:28.523 "zoned": false, 00:08:28.523 "supported_io_types": { 00:08:28.523 "read": true, 00:08:28.523 "write": true, 00:08:28.523 "unmap": true, 00:08:28.523 "flush": true, 00:08:28.523 "reset": true, 00:08:28.523 "nvme_admin": false, 00:08:28.523 "nvme_io": false, 00:08:28.523 "nvme_io_md": false, 00:08:28.523 "write_zeroes": true, 00:08:28.524 "zcopy": false, 00:08:28.524 "get_zone_info": false, 00:08:28.524 "zone_management": false, 00:08:28.524 "zone_append": false, 00:08:28.524 "compare": false, 00:08:28.524 "compare_and_write": false, 00:08:28.524 "abort": false, 00:08:28.524 "seek_hole": false, 00:08:28.524 "seek_data": false, 00:08:28.524 "copy": false, 00:08:28.524 "nvme_iov_md": false 00:08:28.524 }, 00:08:28.524 "memory_domains": [ 00:08:28.524 { 00:08:28.524 "dma_device_id": "system", 00:08:28.524 "dma_device_type": 1 00:08:28.524 }, 00:08:28.524 { 00:08:28.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.524 "dma_device_type": 2 00:08:28.524 }, 00:08:28.524 { 00:08:28.524 "dma_device_id": "system", 00:08:28.524 "dma_device_type": 1 00:08:28.524 }, 00:08:28.524 { 00:08:28.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.524 "dma_device_type": 2 00:08:28.524 }, 00:08:28.524 { 00:08:28.524 "dma_device_id": "system", 00:08:28.524 "dma_device_type": 1 00:08:28.524 }, 00:08:28.524 { 00:08:28.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.524 "dma_device_type": 2 00:08:28.524 }, 00:08:28.524 { 00:08:28.524 "dma_device_id": "system", 00:08:28.524 "dma_device_type": 1 00:08:28.524 }, 00:08:28.524 { 00:08:28.524 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:08:28.524 "dma_device_type": 2 00:08:28.524 } 00:08:28.524 ], 00:08:28.524 "driver_specific": { 00:08:28.524 "raid": { 00:08:28.524 "uuid": "09a83a47-f1ac-4895-b829-d33df01fad1e", 00:08:28.524 "strip_size_kb": 64, 00:08:28.524 "state": "online", 00:08:28.524 "raid_level": "raid0", 00:08:28.524 "superblock": true, 00:08:28.524 "num_base_bdevs": 4, 00:08:28.524 "num_base_bdevs_discovered": 4, 00:08:28.524 "num_base_bdevs_operational": 4, 00:08:28.524 "base_bdevs_list": [ 00:08:28.524 { 00:08:28.524 "name": "pt1", 00:08:28.524 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:28.524 "is_configured": true, 00:08:28.524 "data_offset": 2048, 00:08:28.524 "data_size": 63488 00:08:28.524 }, 00:08:28.524 { 00:08:28.524 "name": "pt2", 00:08:28.524 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:28.524 "is_configured": true, 00:08:28.524 "data_offset": 2048, 00:08:28.524 "data_size": 63488 00:08:28.524 }, 00:08:28.524 { 00:08:28.524 "name": "pt3", 00:08:28.524 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:28.524 "is_configured": true, 00:08:28.524 "data_offset": 2048, 00:08:28.524 "data_size": 63488 00:08:28.524 }, 00:08:28.524 { 00:08:28.524 "name": "pt4", 00:08:28.524 "uuid": "00000000-0000-0000-0000-000000000004", 00:08:28.524 "is_configured": true, 00:08:28.524 "data_offset": 2048, 00:08:28.524 "data_size": 63488 00:08:28.524 } 00:08:28.524 ] 00:08:28.524 } 00:08:28.524 } 00:08:28.524 }' 00:08:28.524 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:28.524 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:28.524 pt2 00:08:28.524 pt3 00:08:28.524 pt4' 00:08:28.524 14:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.524 14:33:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:28.524 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:28.524 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:28.524 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.524 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.524 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.524 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.524 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:28.524 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:28.524 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:28.524 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.524 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:28.524 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.524 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.524 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.524 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:28.524 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:28.524 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:28.524 14:33:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:28.524 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.524 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.524 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.524 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.524 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:28.524 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:28.524 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:28.524 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:08:28.524 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.524 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.524 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.524 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.524 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:28.524 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:28.524 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:28.524 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:28.524 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:28.524 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.524 [2024-10-01 14:33:20.150539] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:28.524 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.524 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=09a83a47-f1ac-4895-b829-d33df01fad1e 00:08:28.524 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 09a83a47-f1ac-4895-b829-d33df01fad1e ']' 00:08:28.524 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:28.524 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.524 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.524 [2024-10-01 14:33:20.178195] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:28.524 [2024-10-01 14:33:20.178299] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:28.524 [2024-10-01 14:33:20.178417] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:28.524 [2024-10-01 14:33:20.178511] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:28.524 [2024-10-01 14:33:20.178555] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:28.524 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.524 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.525 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.525 14:33:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:28.525 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:28.525 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:28.785 14:33:20 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.785 [2024-10-01 14:33:20.290239] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:28.785 [2024-10-01 14:33:20.292201] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:28.785 [2024-10-01 14:33:20.292334] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:28.785 [2024-10-01 14:33:20.292417] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:08:28.785 [2024-10-01 14:33:20.292485] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:28.785 [2024-10-01 14:33:20.292596] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:28.785 [2024-10-01 14:33:20.292764] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:28.785 [2024-10-01 14:33:20.292853] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:08:28.785 [2024-10-01 14:33:20.292915] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:28.785 [2024-10-01 14:33:20.292963] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:08:28.785 request: 00:08:28.785 { 00:08:28.785 "name": "raid_bdev1", 00:08:28.785 "raid_level": "raid0", 00:08:28.785 "base_bdevs": [ 00:08:28.785 "malloc1", 00:08:28.785 "malloc2", 00:08:28.785 "malloc3", 00:08:28.785 "malloc4" 00:08:28.785 ], 00:08:28.785 "strip_size_kb": 64, 00:08:28.785 "superblock": false, 00:08:28.785 "method": "bdev_raid_create", 00:08:28.785 "req_id": 1 00:08:28.785 } 00:08:28.785 Got JSON-RPC error response 00:08:28.785 response: 00:08:28.785 { 00:08:28.785 "code": -17, 00:08:28.785 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:28.785 } 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.785 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.785 [2024-10-01 14:33:20.334241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:28.785 [2024-10-01 14:33:20.334299] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:28.785 [2024-10-01 14:33:20.334316] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:28.785 [2024-10-01 14:33:20.334327] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:28.785 [2024-10-01 14:33:20.336528] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:28.785 [2024-10-01 14:33:20.336569] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:28.785 [2024-10-01 14:33:20.336646] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:28.785 [2024-10-01 14:33:20.336702] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:28.785 pt1 00:08:28.786 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.786 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:08:28.786 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:28.786 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.786 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:28.786 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.786 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:08:28.786 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.786 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.786 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.786 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.786 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.786 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:28.786 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.786 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.786 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.786 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.786 "name": "raid_bdev1", 00:08:28.786 "uuid": "09a83a47-f1ac-4895-b829-d33df01fad1e", 00:08:28.786 "strip_size_kb": 64, 00:08:28.786 "state": "configuring", 00:08:28.786 "raid_level": "raid0", 00:08:28.786 "superblock": true, 00:08:28.786 "num_base_bdevs": 4, 00:08:28.786 "num_base_bdevs_discovered": 1, 00:08:28.786 "num_base_bdevs_operational": 4, 00:08:28.786 "base_bdevs_list": [ 00:08:28.786 { 00:08:28.786 "name": "pt1", 00:08:28.786 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:28.786 "is_configured": true, 00:08:28.786 "data_offset": 2048, 00:08:28.786 "data_size": 63488 00:08:28.786 }, 00:08:28.786 { 00:08:28.786 "name": null, 00:08:28.786 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:28.786 "is_configured": false, 00:08:28.786 "data_offset": 2048, 00:08:28.786 "data_size": 63488 00:08:28.786 }, 00:08:28.786 { 00:08:28.786 "name": null, 00:08:28.786 
"uuid": "00000000-0000-0000-0000-000000000003", 00:08:28.786 "is_configured": false, 00:08:28.786 "data_offset": 2048, 00:08:28.786 "data_size": 63488 00:08:28.786 }, 00:08:28.786 { 00:08:28.786 "name": null, 00:08:28.786 "uuid": "00000000-0000-0000-0000-000000000004", 00:08:28.786 "is_configured": false, 00:08:28.786 "data_offset": 2048, 00:08:28.786 "data_size": 63488 00:08:28.786 } 00:08:28.786 ] 00:08:28.786 }' 00:08:28.786 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.786 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.045 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:08:29.045 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:29.045 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.045 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.045 [2024-10-01 14:33:20.670329] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:29.045 [2024-10-01 14:33:20.670397] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:29.045 [2024-10-01 14:33:20.670417] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:08:29.045 [2024-10-01 14:33:20.670428] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:29.045 [2024-10-01 14:33:20.670860] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:29.045 [2024-10-01 14:33:20.670878] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:29.045 [2024-10-01 14:33:20.670947] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:29.045 [2024-10-01 14:33:20.670968] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:29.045 pt2 00:08:29.045 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.045 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:29.045 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.045 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.045 [2024-10-01 14:33:20.678344] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:29.046 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.046 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:08:29.046 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:29.046 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:29.046 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:29.046 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.046 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:29.046 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.046 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.046 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.046 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.046 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.046 14:33:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:29.046 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.046 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.046 14:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.046 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.046 "name": "raid_bdev1", 00:08:29.046 "uuid": "09a83a47-f1ac-4895-b829-d33df01fad1e", 00:08:29.046 "strip_size_kb": 64, 00:08:29.046 "state": "configuring", 00:08:29.046 "raid_level": "raid0", 00:08:29.046 "superblock": true, 00:08:29.046 "num_base_bdevs": 4, 00:08:29.046 "num_base_bdevs_discovered": 1, 00:08:29.046 "num_base_bdevs_operational": 4, 00:08:29.046 "base_bdevs_list": [ 00:08:29.046 { 00:08:29.046 "name": "pt1", 00:08:29.046 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:29.046 "is_configured": true, 00:08:29.046 "data_offset": 2048, 00:08:29.046 "data_size": 63488 00:08:29.046 }, 00:08:29.046 { 00:08:29.046 "name": null, 00:08:29.046 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:29.046 "is_configured": false, 00:08:29.046 "data_offset": 0, 00:08:29.046 "data_size": 63488 00:08:29.046 }, 00:08:29.046 { 00:08:29.046 "name": null, 00:08:29.046 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:29.046 "is_configured": false, 00:08:29.046 "data_offset": 2048, 00:08:29.046 "data_size": 63488 00:08:29.046 }, 00:08:29.046 { 00:08:29.046 "name": null, 00:08:29.046 "uuid": "00000000-0000-0000-0000-000000000004", 00:08:29.046 "is_configured": false, 00:08:29.046 "data_offset": 2048, 00:08:29.046 "data_size": 63488 00:08:29.046 } 00:08:29.046 ] 00:08:29.046 }' 00:08:29.046 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.046 14:33:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:29.614 14:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:29.614 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:29.614 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:29.614 14:33:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.614 14:33:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.614 [2024-10-01 14:33:21.006403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:29.614 [2024-10-01 14:33:21.006463] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:29.614 [2024-10-01 14:33:21.006484] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:08:29.614 [2024-10-01 14:33:21.006495] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:29.614 [2024-10-01 14:33:21.006921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:29.614 [2024-10-01 14:33:21.006936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:29.614 [2024-10-01 14:33:21.007011] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:29.614 [2024-10-01 14:33:21.007033] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:29.614 pt2 00:08:29.614 14:33:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.614 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:29.614 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:29.614 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:29.614 14:33:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.614 14:33:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.614 [2024-10-01 14:33:21.014385] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:29.614 [2024-10-01 14:33:21.014427] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:29.614 [2024-10-01 14:33:21.014448] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:08:29.614 [2024-10-01 14:33:21.014456] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:29.614 [2024-10-01 14:33:21.014818] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:29.614 [2024-10-01 14:33:21.014830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:29.614 [2024-10-01 14:33:21.014886] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:29.614 [2024-10-01 14:33:21.014903] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:29.614 pt3 00:08:29.614 14:33:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.614 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:29.614 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:29.614 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:08:29.614 14:33:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.614 14:33:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.614 [2024-10-01 14:33:21.022360] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:08:29.614 [2024-10-01 14:33:21.022401] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:29.614 [2024-10-01 14:33:21.022416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:08:29.614 [2024-10-01 14:33:21.022423] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:29.614 [2024-10-01 14:33:21.022775] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:29.614 [2024-10-01 14:33:21.022788] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:08:29.614 [2024-10-01 14:33:21.022842] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:08:29.614 [2024-10-01 14:33:21.022861] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:08:29.615 [2024-10-01 14:33:21.022982] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:29.615 [2024-10-01 14:33:21.022990] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:08:29.615 [2024-10-01 14:33:21.023223] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:29.615 [2024-10-01 14:33:21.023351] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:29.615 [2024-10-01 14:33:21.023362] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:29.615 [2024-10-01 14:33:21.023476] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:29.615 pt4 00:08:29.615 14:33:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.615 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:29.615 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:08:29.615 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:08:29.615 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:29.615 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:29.615 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:29.615 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.615 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:29.615 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.615 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.615 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.615 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.615 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.615 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:29.615 14:33:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.615 14:33:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.615 14:33:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.615 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.615 "name": "raid_bdev1", 00:08:29.615 "uuid": "09a83a47-f1ac-4895-b829-d33df01fad1e", 00:08:29.615 "strip_size_kb": 64, 00:08:29.615 "state": "online", 00:08:29.615 "raid_level": "raid0", 00:08:29.615 
"superblock": true, 00:08:29.615 "num_base_bdevs": 4, 00:08:29.615 "num_base_bdevs_discovered": 4, 00:08:29.615 "num_base_bdevs_operational": 4, 00:08:29.615 "base_bdevs_list": [ 00:08:29.615 { 00:08:29.615 "name": "pt1", 00:08:29.615 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:29.615 "is_configured": true, 00:08:29.615 "data_offset": 2048, 00:08:29.615 "data_size": 63488 00:08:29.615 }, 00:08:29.615 { 00:08:29.615 "name": "pt2", 00:08:29.615 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:29.615 "is_configured": true, 00:08:29.615 "data_offset": 2048, 00:08:29.615 "data_size": 63488 00:08:29.615 }, 00:08:29.615 { 00:08:29.615 "name": "pt3", 00:08:29.615 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:29.615 "is_configured": true, 00:08:29.615 "data_offset": 2048, 00:08:29.615 "data_size": 63488 00:08:29.615 }, 00:08:29.615 { 00:08:29.615 "name": "pt4", 00:08:29.615 "uuid": "00000000-0000-0000-0000-000000000004", 00:08:29.615 "is_configured": true, 00:08:29.615 "data_offset": 2048, 00:08:29.615 "data_size": 63488 00:08:29.615 } 00:08:29.615 ] 00:08:29.615 }' 00:08:29.615 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.615 14:33:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:29.874 14:33:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:29.874 [2024-10-01 14:33:21.338839] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:29.874 "name": "raid_bdev1", 00:08:29.874 "aliases": [ 00:08:29.874 "09a83a47-f1ac-4895-b829-d33df01fad1e" 00:08:29.874 ], 00:08:29.874 "product_name": "Raid Volume", 00:08:29.874 "block_size": 512, 00:08:29.874 "num_blocks": 253952, 00:08:29.874 "uuid": "09a83a47-f1ac-4895-b829-d33df01fad1e", 00:08:29.874 "assigned_rate_limits": { 00:08:29.874 "rw_ios_per_sec": 0, 00:08:29.874 "rw_mbytes_per_sec": 0, 00:08:29.874 "r_mbytes_per_sec": 0, 00:08:29.874 "w_mbytes_per_sec": 0 00:08:29.874 }, 00:08:29.874 "claimed": false, 00:08:29.874 "zoned": false, 00:08:29.874 "supported_io_types": { 00:08:29.874 "read": true, 00:08:29.874 "write": true, 00:08:29.874 "unmap": true, 00:08:29.874 "flush": true, 00:08:29.874 "reset": true, 00:08:29.874 "nvme_admin": false, 00:08:29.874 "nvme_io": false, 00:08:29.874 "nvme_io_md": false, 00:08:29.874 "write_zeroes": true, 00:08:29.874 "zcopy": false, 00:08:29.874 "get_zone_info": false, 00:08:29.874 "zone_management": false, 00:08:29.874 "zone_append": false, 00:08:29.874 "compare": false, 00:08:29.874 "compare_and_write": false, 00:08:29.874 "abort": false, 00:08:29.874 "seek_hole": false, 00:08:29.874 "seek_data": false, 00:08:29.874 "copy": false, 00:08:29.874 "nvme_iov_md": false 00:08:29.874 }, 00:08:29.874 
"memory_domains": [ 00:08:29.874 { 00:08:29.874 "dma_device_id": "system", 00:08:29.874 "dma_device_type": 1 00:08:29.874 }, 00:08:29.874 { 00:08:29.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.874 "dma_device_type": 2 00:08:29.874 }, 00:08:29.874 { 00:08:29.874 "dma_device_id": "system", 00:08:29.874 "dma_device_type": 1 00:08:29.874 }, 00:08:29.874 { 00:08:29.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.874 "dma_device_type": 2 00:08:29.874 }, 00:08:29.874 { 00:08:29.874 "dma_device_id": "system", 00:08:29.874 "dma_device_type": 1 00:08:29.874 }, 00:08:29.874 { 00:08:29.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.874 "dma_device_type": 2 00:08:29.874 }, 00:08:29.874 { 00:08:29.874 "dma_device_id": "system", 00:08:29.874 "dma_device_type": 1 00:08:29.874 }, 00:08:29.874 { 00:08:29.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.874 "dma_device_type": 2 00:08:29.874 } 00:08:29.874 ], 00:08:29.874 "driver_specific": { 00:08:29.874 "raid": { 00:08:29.874 "uuid": "09a83a47-f1ac-4895-b829-d33df01fad1e", 00:08:29.874 "strip_size_kb": 64, 00:08:29.874 "state": "online", 00:08:29.874 "raid_level": "raid0", 00:08:29.874 "superblock": true, 00:08:29.874 "num_base_bdevs": 4, 00:08:29.874 "num_base_bdevs_discovered": 4, 00:08:29.874 "num_base_bdevs_operational": 4, 00:08:29.874 "base_bdevs_list": [ 00:08:29.874 { 00:08:29.874 "name": "pt1", 00:08:29.874 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:29.874 "is_configured": true, 00:08:29.874 "data_offset": 2048, 00:08:29.874 "data_size": 63488 00:08:29.874 }, 00:08:29.874 { 00:08:29.874 "name": "pt2", 00:08:29.874 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:29.874 "is_configured": true, 00:08:29.874 "data_offset": 2048, 00:08:29.874 "data_size": 63488 00:08:29.874 }, 00:08:29.874 { 00:08:29.874 "name": "pt3", 00:08:29.874 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:29.874 "is_configured": true, 00:08:29.874 "data_offset": 2048, 00:08:29.874 "data_size": 63488 
00:08:29.874 }, 00:08:29.874 { 00:08:29.874 "name": "pt4", 00:08:29.874 "uuid": "00000000-0000-0000-0000-000000000004", 00:08:29.874 "is_configured": true, 00:08:29.874 "data_offset": 2048, 00:08:29.874 "data_size": 63488 00:08:29.874 } 00:08:29.874 ] 00:08:29.874 } 00:08:29.874 } 00:08:29.874 }' 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:29.874 pt2 00:08:29.874 pt3 00:08:29.874 pt4' 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.874 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:30.135 [2024-10-01 14:33:21.558861] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:30.135 14:33:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.135 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 09a83a47-f1ac-4895-b829-d33df01fad1e '!=' 09a83a47-f1ac-4895-b829-d33df01fad1e ']' 00:08:30.135 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:30.135 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:30.135 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:30.135 14:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 69151 00:08:30.135 14:33:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 69151 ']' 00:08:30.135 14:33:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 69151 00:08:30.135 14:33:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:08:30.135 14:33:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:30.135 14:33:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69151 00:08:30.135 14:33:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:30.135 14:33:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:30.135 killing process with pid 69151 00:08:30.135 14:33:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69151' 00:08:30.135 14:33:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 69151 00:08:30.135 [2024-10-01 14:33:21.614442] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:30.135 14:33:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 69151 00:08:30.135 [2024-10-01 14:33:21.614523] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:30.135 [2024-10-01 14:33:21.614593] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:30.135 [2024-10-01 14:33:21.614603] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:30.396 [2024-10-01 14:33:21.861379] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:31.338 ************************************ 00:08:31.338 END TEST raid_superblock_test 00:08:31.338 ************************************ 00:08:31.338 14:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:31.338 00:08:31.338 real 0m4.218s 00:08:31.338 user 0m6.021s 00:08:31.338 sys 0m0.608s 00:08:31.338 14:33:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:31.338 14:33:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.338 14:33:22 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:08:31.338 14:33:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:31.338 14:33:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:31.338 14:33:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:31.338 ************************************ 00:08:31.338 START TEST raid_read_error_test 00:08:31.338 ************************************ 00:08:31.338 14:33:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read 00:08:31.338 14:33:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:31.338 14:33:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:08:31.338 14:33:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:31.338 14:33:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:31.338 14:33:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:31.338 14:33:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:31.338 14:33:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:31.338 14:33:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:31.338 14:33:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:31.338 14:33:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:31.338 14:33:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:31.338 14:33:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:31.338 14:33:22 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:31.338 14:33:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:31.338 14:33:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:08:31.338 14:33:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:31.338 14:33:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:31.338 14:33:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:08:31.338 14:33:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:31.338 14:33:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:31.338 14:33:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:31.339 14:33:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:31.339 14:33:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:31.339 14:33:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:31.339 14:33:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:31.339 14:33:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:31.339 14:33:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:31.339 14:33:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:31.339 14:33:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.x45HhZoZvp 00:08:31.339 14:33:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69399 00:08:31.339 14:33:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69399 00:08:31.339 14:33:22 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@831 -- # '[' -z 69399 ']' 00:08:31.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.339 14:33:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.339 14:33:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:31.339 14:33:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.339 14:33:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:31.339 14:33:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:31.339 14:33:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.339 [2024-10-01 14:33:22.817609] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:08:31.339 [2024-10-01 14:33:22.817777] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69399 ] 00:08:31.339 [2024-10-01 14:33:22.969883] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.599 [2024-10-01 14:33:23.159240] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.860 [2024-10-01 14:33:23.296153] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:31.860 [2024-10-01 14:33:23.296188] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.153 14:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:32.153 14:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:32.153 14:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:32.153 14:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:32.153 14:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.153 14:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.153 BaseBdev1_malloc 00:08:32.153 14:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.154 14:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:32.154 14:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.154 14:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.154 true 00:08:32.154 14:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:32.154 14:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:32.154 14:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.154 14:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.154 [2024-10-01 14:33:23.713902] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:32.154 [2024-10-01 14:33:23.713955] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.154 [2024-10-01 14:33:23.713973] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:32.154 [2024-10-01 14:33:23.713985] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.154 [2024-10-01 14:33:23.716154] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.154 [2024-10-01 14:33:23.716192] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:32.154 BaseBdev1 00:08:32.154 14:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.154 14:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:32.154 14:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:32.154 14:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.154 14:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.154 BaseBdev2_malloc 00:08:32.154 14:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.154 14:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:32.154 14:33:23 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.154 14:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.154 true 00:08:32.154 14:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.154 14:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:32.154 14:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.154 14:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.154 [2024-10-01 14:33:23.775281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:32.154 [2024-10-01 14:33:23.775338] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.154 [2024-10-01 14:33:23.775356] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:32.154 [2024-10-01 14:33:23.775367] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.154 [2024-10-01 14:33:23.777591] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.154 [2024-10-01 14:33:23.777640] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:32.154 BaseBdev2 00:08:32.154 14:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.154 14:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:32.154 14:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:32.154 14:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.154 14:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.154 BaseBdev3_malloc 00:08:32.154 14:33:23 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.154 14:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:32.154 14:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.154 14:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.154 true 00:08:32.154 14:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.154 14:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:32.154 14:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.154 14:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.154 [2024-10-01 14:33:23.819567] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:32.154 [2024-10-01 14:33:23.819617] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.154 [2024-10-01 14:33:23.819634] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:32.154 [2024-10-01 14:33:23.819645] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.154 [2024-10-01 14:33:23.821781] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.154 [2024-10-01 14:33:23.821820] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:32.154 BaseBdev3 00:08:32.154 14:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.154 14:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:32.154 14:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:08:32.154 14:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.154 14:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.414 BaseBdev4_malloc 00:08:32.414 14:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.414 14:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:08:32.414 14:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.414 14:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.414 true 00:08:32.414 14:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.414 14:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:08:32.414 14:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.414 14:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.414 [2024-10-01 14:33:23.868032] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:08:32.414 [2024-10-01 14:33:23.868088] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.414 [2024-10-01 14:33:23.868106] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:32.414 [2024-10-01 14:33:23.868120] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.414 [2024-10-01 14:33:23.870298] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.415 [2024-10-01 14:33:23.870339] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:08:32.415 BaseBdev4 00:08:32.415 14:33:23 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.415 14:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:08:32.415 14:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.415 14:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.415 [2024-10-01 14:33:23.876112] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:32.415 [2024-10-01 14:33:23.877999] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:32.415 [2024-10-01 14:33:23.878080] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:32.415 [2024-10-01 14:33:23.878144] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:08:32.415 [2024-10-01 14:33:23.878374] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:08:32.415 [2024-10-01 14:33:23.878393] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:08:32.415 [2024-10-01 14:33:23.878656] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:32.415 [2024-10-01 14:33:23.878816] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:08:32.415 [2024-10-01 14:33:23.878829] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:08:32.415 [2024-10-01 14:33:23.878986] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:32.415 14:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.415 14:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:08:32.415 14:33:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:32.415 14:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:32.415 14:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:32.415 14:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.415 14:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:32.415 14:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.415 14:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.415 14:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.415 14:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.415 14:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.415 14:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:32.415 14:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.415 14:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.415 14:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.415 14:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.415 "name": "raid_bdev1", 00:08:32.415 "uuid": "39bc85a2-decd-4467-9ecd-6b8e443fee5c", 00:08:32.415 "strip_size_kb": 64, 00:08:32.415 "state": "online", 00:08:32.415 "raid_level": "raid0", 00:08:32.415 "superblock": true, 00:08:32.415 "num_base_bdevs": 4, 00:08:32.415 "num_base_bdevs_discovered": 4, 00:08:32.415 "num_base_bdevs_operational": 4, 00:08:32.415 "base_bdevs_list": [ 00:08:32.415 
{ 00:08:32.415 "name": "BaseBdev1", 00:08:32.415 "uuid": "304ef23f-7113-5415-b2e4-0b21a09c7146", 00:08:32.415 "is_configured": true, 00:08:32.415 "data_offset": 2048, 00:08:32.415 "data_size": 63488 00:08:32.415 }, 00:08:32.415 { 00:08:32.415 "name": "BaseBdev2", 00:08:32.415 "uuid": "81b9a46a-6bfb-5800-b3d0-01ff140478c2", 00:08:32.415 "is_configured": true, 00:08:32.415 "data_offset": 2048, 00:08:32.415 "data_size": 63488 00:08:32.415 }, 00:08:32.415 { 00:08:32.415 "name": "BaseBdev3", 00:08:32.415 "uuid": "b951a3ee-a321-5f9c-8115-c2283201eada", 00:08:32.415 "is_configured": true, 00:08:32.415 "data_offset": 2048, 00:08:32.415 "data_size": 63488 00:08:32.415 }, 00:08:32.415 { 00:08:32.415 "name": "BaseBdev4", 00:08:32.415 "uuid": "d4633ee7-3931-5581-a406-b2cd5cb0860a", 00:08:32.415 "is_configured": true, 00:08:32.415 "data_offset": 2048, 00:08:32.415 "data_size": 63488 00:08:32.415 } 00:08:32.415 ] 00:08:32.415 }' 00:08:32.415 14:33:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.415 14:33:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.675 14:33:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:32.675 14:33:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:32.675 [2024-10-01 14:33:24.293129] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:33.618 14:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:33.618 14:33:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.618 14:33:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.618 14:33:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.618 14:33:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:33.618 14:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:33.618 14:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:08:33.618 14:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:08:33.618 14:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:33.618 14:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:33.618 14:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:33.618 14:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.618 14:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:33.618 14:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.618 14:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.618 14:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.618 14:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.618 14:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.618 14:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:33.618 14:33:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.618 14:33:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.618 14:33:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.618 14:33:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.618 "name": "raid_bdev1", 00:08:33.618 "uuid": "39bc85a2-decd-4467-9ecd-6b8e443fee5c", 00:08:33.618 "strip_size_kb": 64, 00:08:33.618 "state": "online", 00:08:33.618 "raid_level": "raid0", 00:08:33.618 "superblock": true, 00:08:33.618 "num_base_bdevs": 4, 00:08:33.618 "num_base_bdevs_discovered": 4, 00:08:33.618 "num_base_bdevs_operational": 4, 00:08:33.618 "base_bdevs_list": [ 00:08:33.618 { 00:08:33.618 "name": "BaseBdev1", 00:08:33.618 "uuid": "304ef23f-7113-5415-b2e4-0b21a09c7146", 00:08:33.618 "is_configured": true, 00:08:33.618 "data_offset": 2048, 00:08:33.618 "data_size": 63488 00:08:33.618 }, 00:08:33.618 { 00:08:33.618 "name": "BaseBdev2", 00:08:33.618 "uuid": "81b9a46a-6bfb-5800-b3d0-01ff140478c2", 00:08:33.618 "is_configured": true, 00:08:33.618 "data_offset": 2048, 00:08:33.618 "data_size": 63488 00:08:33.618 }, 00:08:33.618 { 00:08:33.618 "name": "BaseBdev3", 00:08:33.618 "uuid": "b951a3ee-a321-5f9c-8115-c2283201eada", 00:08:33.618 "is_configured": true, 00:08:33.618 "data_offset": 2048, 00:08:33.618 "data_size": 63488 00:08:33.618 }, 00:08:33.619 { 00:08:33.619 "name": "BaseBdev4", 00:08:33.619 "uuid": "d4633ee7-3931-5581-a406-b2cd5cb0860a", 00:08:33.619 "is_configured": true, 00:08:33.619 "data_offset": 2048, 00:08:33.619 "data_size": 63488 00:08:33.619 } 00:08:33.619 ] 00:08:33.619 }' 00:08:33.619 14:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.619 14:33:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.880 14:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:33.880 14:33:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.880 14:33:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.880 [2024-10-01 14:33:25.535228] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:33.880 [2024-10-01 14:33:25.535262] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:33.880 [2024-10-01 14:33:25.538338] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:33.880 [2024-10-01 14:33:25.538394] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:33.880 [2024-10-01 14:33:25.538439] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:33.880 [2024-10-01 14:33:25.538450] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:08:33.880 { 00:08:33.880 "results": [ 00:08:33.880 { 00:08:33.880 "job": "raid_bdev1", 00:08:33.880 "core_mask": "0x1", 00:08:33.880 "workload": "randrw", 00:08:33.880 "percentage": 50, 00:08:33.880 "status": "finished", 00:08:33.880 "queue_depth": 1, 00:08:33.880 "io_size": 131072, 00:08:33.880 "runtime": 1.240169, 00:08:33.880 "iops": 14670.58118691888, 00:08:33.880 "mibps": 1833.82264836486, 00:08:33.880 "io_failed": 1, 00:08:33.880 "io_timeout": 0, 00:08:33.880 "avg_latency_us": 93.44057902635974, 00:08:33.880 "min_latency_us": 33.47692307692308, 00:08:33.880 "max_latency_us": 1701.4153846153847 00:08:33.880 } 00:08:33.880 ], 00:08:33.880 "core_count": 1 00:08:33.880 } 00:08:33.880 14:33:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.880 14:33:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69399 00:08:33.880 14:33:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 69399 ']' 00:08:33.880 14:33:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 69399 00:08:33.880 14:33:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:33.880 14:33:25 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:33.880 14:33:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69399 00:08:34.141 killing process with pid 69399 00:08:34.141 14:33:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:34.141 14:33:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:34.141 14:33:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69399' 00:08:34.141 14:33:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 69399 00:08:34.141 [2024-10-01 14:33:25.564932] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:34.141 14:33:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 69399 00:08:34.141 [2024-10-01 14:33:25.764756] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:35.086 14:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.x45HhZoZvp 00:08:35.086 14:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:35.086 14:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:35.086 14:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.81 00:08:35.086 14:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:35.086 14:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:35.086 14:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:35.086 14:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.81 != \0\.\0\0 ]] 00:08:35.086 00:08:35.086 real 0m3.888s 00:08:35.086 user 0m4.568s 00:08:35.086 sys 0m0.418s 00:08:35.086 ************************************ 00:08:35.086 END TEST raid_read_error_test 
00:08:35.086 ************************************ 00:08:35.086 14:33:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:35.086 14:33:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.086 14:33:26 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:08:35.086 14:33:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:35.086 14:33:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:35.086 14:33:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:35.086 ************************************ 00:08:35.086 START TEST raid_write_error_test 00:08:35.086 ************************************ 00:08:35.086 14:33:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write 00:08:35.086 14:33:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:35.086 14:33:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:08:35.086 14:33:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:35.086 14:33:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:35.086 14:33:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:35.086 14:33:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:35.086 14:33:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:35.086 14:33:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:35.086 14:33:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:35.086 14:33:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:35.086 14:33:26 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:35.086 14:33:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:35.086 14:33:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:35.086 14:33:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:35.086 14:33:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:08:35.086 14:33:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:35.086 14:33:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:35.086 14:33:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:08:35.086 14:33:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:35.086 14:33:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:35.086 14:33:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:35.086 14:33:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:35.086 14:33:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:35.086 14:33:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:35.086 14:33:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:35.086 14:33:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:35.086 14:33:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:35.086 14:33:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:35.086 14:33:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.kaZQTBhQ3P 00:08:35.086 14:33:26 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69539 00:08:35.086 14:33:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69539 00:08:35.086 14:33:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 69539 ']' 00:08:35.086 14:33:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.086 14:33:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:35.086 14:33:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:35.086 14:33:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.086 14:33:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:35.086 14:33:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.346 [2024-10-01 14:33:26.773168] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:08:35.346 [2024-10-01 14:33:26.773292] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69539 ] 00:08:35.346 [2024-10-01 14:33:26.923125] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.605 [2024-10-01 14:33:27.113918] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.605 [2024-10-01 14:33:27.250333] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:35.605 [2024-10-01 14:33:27.250374] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.247 BaseBdev1_malloc 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.247 true 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.247 [2024-10-01 14:33:27.668572] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:36.247 [2024-10-01 14:33:27.668625] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:36.247 [2024-10-01 14:33:27.668645] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:36.247 [2024-10-01 14:33:27.668657] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:36.247 [2024-10-01 14:33:27.670833] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:36.247 [2024-10-01 14:33:27.670868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:36.247 BaseBdev1 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.247 BaseBdev2_malloc 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:36.247 14:33:27 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.247 true 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.247 [2024-10-01 14:33:27.721563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:36.247 [2024-10-01 14:33:27.721613] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:36.247 [2024-10-01 14:33:27.721629] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:36.247 [2024-10-01 14:33:27.721640] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:36.247 [2024-10-01 14:33:27.723741] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:36.247 [2024-10-01 14:33:27.723774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:36.247 BaseBdev2 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:08:36.247 BaseBdev3_malloc 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.247 true 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.247 [2024-10-01 14:33:27.765397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:36.247 [2024-10-01 14:33:27.765440] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:36.247 [2024-10-01 14:33:27.765455] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:36.247 [2024-10-01 14:33:27.765465] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:36.247 [2024-10-01 14:33:27.767566] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:36.247 [2024-10-01 14:33:27.767720] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:36.247 BaseBdev3 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.247 BaseBdev4_malloc 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.247 true 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.247 14:33:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.247 [2024-10-01 14:33:27.809363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:08:36.247 [2024-10-01 14:33:27.809407] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:36.247 [2024-10-01 14:33:27.809424] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:36.247 [2024-10-01 14:33:27.809437] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:36.248 [2024-10-01 14:33:27.811522] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:36.248 [2024-10-01 14:33:27.811655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:08:36.248 BaseBdev4 
00:08:36.248 14:33:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.248 14:33:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:08:36.248 14:33:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.248 14:33:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.248 [2024-10-01 14:33:27.817446] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:36.248 [2024-10-01 14:33:27.819404] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:36.248 [2024-10-01 14:33:27.819558] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:36.248 [2024-10-01 14:33:27.819649] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:08:36.248 [2024-10-01 14:33:27.819939] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:08:36.248 [2024-10-01 14:33:27.820015] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:08:36.248 [2024-10-01 14:33:27.820279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:36.248 [2024-10-01 14:33:27.820425] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:08:36.248 [2024-10-01 14:33:27.820434] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:08:36.248 [2024-10-01 14:33:27.820579] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:36.248 14:33:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.248 14:33:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:08:36.248 14:33:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:36.248 14:33:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:36.248 14:33:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:36.248 14:33:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.248 14:33:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:36.248 14:33:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.248 14:33:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.248 14:33:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.248 14:33:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.248 14:33:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.248 14:33:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:36.248 14:33:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.248 14:33:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.248 14:33:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.248 14:33:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.248 "name": "raid_bdev1", 00:08:36.248 "uuid": "d51983b6-d74b-48e3-a11e-b6b4712f8a7f", 00:08:36.248 "strip_size_kb": 64, 00:08:36.248 "state": "online", 00:08:36.248 "raid_level": "raid0", 00:08:36.248 "superblock": true, 00:08:36.248 "num_base_bdevs": 4, 00:08:36.248 "num_base_bdevs_discovered": 4, 00:08:36.248 
"num_base_bdevs_operational": 4, 00:08:36.248 "base_bdevs_list": [ 00:08:36.248 { 00:08:36.248 "name": "BaseBdev1", 00:08:36.248 "uuid": "1a613fbe-dec0-5f58-8693-26516c5ec1a9", 00:08:36.248 "is_configured": true, 00:08:36.248 "data_offset": 2048, 00:08:36.248 "data_size": 63488 00:08:36.248 }, 00:08:36.248 { 00:08:36.248 "name": "BaseBdev2", 00:08:36.248 "uuid": "670915fd-f86a-5d0d-b768-e73ef77216f8", 00:08:36.248 "is_configured": true, 00:08:36.248 "data_offset": 2048, 00:08:36.248 "data_size": 63488 00:08:36.248 }, 00:08:36.248 { 00:08:36.248 "name": "BaseBdev3", 00:08:36.248 "uuid": "d986da47-47e5-5b0a-9b04-1bedf421f74c", 00:08:36.248 "is_configured": true, 00:08:36.248 "data_offset": 2048, 00:08:36.248 "data_size": 63488 00:08:36.248 }, 00:08:36.248 { 00:08:36.248 "name": "BaseBdev4", 00:08:36.248 "uuid": "0daa9184-afaa-59cb-a886-73eb516e20ac", 00:08:36.248 "is_configured": true, 00:08:36.248 "data_offset": 2048, 00:08:36.248 "data_size": 63488 00:08:36.248 } 00:08:36.248 ] 00:08:36.248 }' 00:08:36.248 14:33:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.248 14:33:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.509 14:33:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:36.509 14:33:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:36.770 [2024-10-01 14:33:28.214465] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:37.711 14:33:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:37.711 14:33:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.711 14:33:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.711 14:33:29 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.711 14:33:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:37.711 14:33:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:37.711 14:33:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:08:37.711 14:33:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:08:37.711 14:33:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:37.711 14:33:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:37.711 14:33:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:37.711 14:33:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.711 14:33:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:37.711 14:33:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.711 14:33:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.712 14:33:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.712 14:33:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.712 14:33:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.712 14:33:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.712 14:33:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.712 14:33:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:37.712 14:33:29 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.712 14:33:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.712 "name": "raid_bdev1", 00:08:37.712 "uuid": "d51983b6-d74b-48e3-a11e-b6b4712f8a7f", 00:08:37.712 "strip_size_kb": 64, 00:08:37.712 "state": "online", 00:08:37.712 "raid_level": "raid0", 00:08:37.712 "superblock": true, 00:08:37.712 "num_base_bdevs": 4, 00:08:37.712 "num_base_bdevs_discovered": 4, 00:08:37.712 "num_base_bdevs_operational": 4, 00:08:37.712 "base_bdevs_list": [ 00:08:37.712 { 00:08:37.712 "name": "BaseBdev1", 00:08:37.712 "uuid": "1a613fbe-dec0-5f58-8693-26516c5ec1a9", 00:08:37.712 "is_configured": true, 00:08:37.712 "data_offset": 2048, 00:08:37.712 "data_size": 63488 00:08:37.712 }, 00:08:37.712 { 00:08:37.712 "name": "BaseBdev2", 00:08:37.712 "uuid": "670915fd-f86a-5d0d-b768-e73ef77216f8", 00:08:37.712 "is_configured": true, 00:08:37.712 "data_offset": 2048, 00:08:37.712 "data_size": 63488 00:08:37.712 }, 00:08:37.712 { 00:08:37.712 "name": "BaseBdev3", 00:08:37.712 "uuid": "d986da47-47e5-5b0a-9b04-1bedf421f74c", 00:08:37.712 "is_configured": true, 00:08:37.712 "data_offset": 2048, 00:08:37.712 "data_size": 63488 00:08:37.712 }, 00:08:37.712 { 00:08:37.712 "name": "BaseBdev4", 00:08:37.712 "uuid": "0daa9184-afaa-59cb-a886-73eb516e20ac", 00:08:37.712 "is_configured": true, 00:08:37.712 "data_offset": 2048, 00:08:37.712 "data_size": 63488 00:08:37.712 } 00:08:37.712 ] 00:08:37.712 }' 00:08:37.712 14:33:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.712 14:33:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.974 14:33:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:37.974 14:33:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.974 14:33:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:37.974 [2024-10-01 14:33:29.464632] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:37.974 [2024-10-01 14:33:29.464663] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:37.974 [2024-10-01 14:33:29.467923] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:37.974 [2024-10-01 14:33:29.468057] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:37.974 [2024-10-01 14:33:29.468126] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:37.974 [2024-10-01 14:33:29.468200] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:08:37.974 { 00:08:37.974 "results": [ 00:08:37.974 { 00:08:37.974 "job": "raid_bdev1", 00:08:37.974 "core_mask": "0x1", 00:08:37.974 "workload": "randrw", 00:08:37.974 "percentage": 50, 00:08:37.974 "status": "finished", 00:08:37.974 "queue_depth": 1, 00:08:37.974 "io_size": 131072, 00:08:37.974 "runtime": 1.248161, 00:08:37.974 "iops": 14637.534741111123, 00:08:37.974 "mibps": 1829.6918426388904, 00:08:37.974 "io_failed": 1, 00:08:37.974 "io_timeout": 0, 00:08:37.974 "avg_latency_us": 93.6480835961149, 00:08:37.974 "min_latency_us": 33.47692307692308, 00:08:37.974 "max_latency_us": 1701.4153846153847 00:08:37.974 } 00:08:37.974 ], 00:08:37.974 "core_count": 1 00:08:37.974 } 00:08:37.974 14:33:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.974 14:33:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69539 00:08:37.974 14:33:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 69539 ']' 00:08:37.974 14:33:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 69539 00:08:37.974 14:33:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 
00:08:37.974 14:33:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:37.974 14:33:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69539 00:08:37.974 killing process with pid 69539 00:08:37.974 14:33:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:37.974 14:33:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:37.974 14:33:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69539' 00:08:37.974 14:33:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 69539 00:08:37.974 [2024-10-01 14:33:29.501174] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:37.974 14:33:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 69539 00:08:38.236 [2024-10-01 14:33:29.702398] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:39.178 14:33:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:39.178 14:33:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.kaZQTBhQ3P 00:08:39.178 14:33:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:39.178 14:33:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.80 00:08:39.178 14:33:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:39.178 14:33:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:39.178 14:33:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:39.178 ************************************ 00:08:39.178 END TEST raid_write_error_test 00:08:39.178 ************************************ 00:08:39.178 14:33:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- 
# [[ 0.80 != \0\.\0\0 ]] 00:08:39.178 00:08:39.178 real 0m3.877s 00:08:39.178 user 0m4.533s 00:08:39.178 sys 0m0.419s 00:08:39.178 14:33:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:39.178 14:33:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.178 14:33:30 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:39.178 14:33:30 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:08:39.178 14:33:30 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:39.178 14:33:30 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:39.178 14:33:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:39.178 ************************************ 00:08:39.178 START TEST raid_state_function_test 00:08:39.178 ************************************ 00:08:39.178 14:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false 00:08:39.178 14:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:39.178 14:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:08:39.178 14:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:39.178 14:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:39.178 14:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:39.178 14:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:39.178 14:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:39.178 14:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:39.178 14:33:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:39.178 14:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:39.178 14:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:39.178 14:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:39.178 14:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:39.178 14:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:39.178 14:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:39.178 14:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:08:39.178 14:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:39.178 14:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:39.178 Process raid pid: 69672 00:08:39.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:39.178 14:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:08:39.178 14:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:39.178 14:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:39.178 14:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:39.178 14:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:39.178 14:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:39.178 14:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:39.178 14:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:39.178 14:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:39.178 14:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:39.178 14:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:39.178 14:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69672 00:08:39.178 14:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69672' 00:08:39.178 14:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69672 00:08:39.178 14:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 69672 ']' 00:08:39.178 14:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.178 14:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:39.178 14:33:30 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:39.178 14:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.178 14:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:39.179 14:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.179 [2024-10-01 14:33:30.715615] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:08:39.179 [2024-10-01 14:33:30.715930] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:39.441 [2024-10-01 14:33:30.867502] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.441 [2024-10-01 14:33:31.057571] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.701 [2024-10-01 14:33:31.194992] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:39.701 [2024-10-01 14:33:31.195176] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:39.966 14:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:39.966 14:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:39.966 14:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:39.966 14:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.966 14:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.966 
[2024-10-01 14:33:31.581039] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:39.966 [2024-10-01 14:33:31.581188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:39.966 [2024-10-01 14:33:31.581249] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:39.966 [2024-10-01 14:33:31.581277] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:39.966 [2024-10-01 14:33:31.581295] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:39.966 [2024-10-01 14:33:31.581334] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:39.966 [2024-10-01 14:33:31.581353] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:39.966 [2024-10-01 14:33:31.581373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:39.966 14:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.966 14:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:39.966 14:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.966 14:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.966 14:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:39.966 14:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.966 14:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:39.966 14:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.966 
14:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.966 14:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.966 14:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.966 14:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.966 14:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.966 14:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.966 14:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.966 14:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.966 14:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.966 "name": "Existed_Raid", 00:08:39.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.966 "strip_size_kb": 64, 00:08:39.966 "state": "configuring", 00:08:39.966 "raid_level": "concat", 00:08:39.966 "superblock": false, 00:08:39.966 "num_base_bdevs": 4, 00:08:39.966 "num_base_bdevs_discovered": 0, 00:08:39.966 "num_base_bdevs_operational": 4, 00:08:39.966 "base_bdevs_list": [ 00:08:39.966 { 00:08:39.966 "name": "BaseBdev1", 00:08:39.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.966 "is_configured": false, 00:08:39.966 "data_offset": 0, 00:08:39.966 "data_size": 0 00:08:39.966 }, 00:08:39.966 { 00:08:39.966 "name": "BaseBdev2", 00:08:39.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.966 "is_configured": false, 00:08:39.966 "data_offset": 0, 00:08:39.966 "data_size": 0 00:08:39.966 }, 00:08:39.966 { 00:08:39.966 "name": "BaseBdev3", 00:08:39.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.966 "is_configured": false, 00:08:39.966 
"data_offset": 0, 00:08:39.966 "data_size": 0 00:08:39.966 }, 00:08:39.966 { 00:08:39.966 "name": "BaseBdev4", 00:08:39.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.966 "is_configured": false, 00:08:39.966 "data_offset": 0, 00:08:39.966 "data_size": 0 00:08:39.966 } 00:08:39.966 ] 00:08:39.966 }' 00:08:39.966 14:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.966 14:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.247 14:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:40.247 14:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.247 14:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.247 [2024-10-01 14:33:31.897040] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:40.247 [2024-10-01 14:33:31.897075] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:40.247 14:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.247 14:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:40.247 14:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.247 14:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.247 [2024-10-01 14:33:31.905056] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:40.247 [2024-10-01 14:33:31.905093] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:40.247 [2024-10-01 14:33:31.905102] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev2 00:08:40.247 [2024-10-01 14:33:31.905110] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:40.247 [2024-10-01 14:33:31.905116] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:40.247 [2024-10-01 14:33:31.905125] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:40.247 [2024-10-01 14:33:31.905131] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:40.247 [2024-10-01 14:33:31.905139] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:40.247 14:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.247 14:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:40.247 14:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.247 14:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.508 [2024-10-01 14:33:31.949917] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:40.508 BaseBdev1 00:08:40.508 14:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.508 14:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:40.508 14:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:40.508 14:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:40.508 14:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:40.508 14:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:40.508 14:33:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:40.508 14:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:40.508 14:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.508 14:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.508 14:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.508 14:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:40.508 14:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.508 14:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.508 [ 00:08:40.508 { 00:08:40.508 "name": "BaseBdev1", 00:08:40.508 "aliases": [ 00:08:40.508 "b8fcc672-87a3-43cd-bb9a-90a2a8be7898" 00:08:40.508 ], 00:08:40.508 "product_name": "Malloc disk", 00:08:40.508 "block_size": 512, 00:08:40.508 "num_blocks": 65536, 00:08:40.509 "uuid": "b8fcc672-87a3-43cd-bb9a-90a2a8be7898", 00:08:40.509 "assigned_rate_limits": { 00:08:40.509 "rw_ios_per_sec": 0, 00:08:40.509 "rw_mbytes_per_sec": 0, 00:08:40.509 "r_mbytes_per_sec": 0, 00:08:40.509 "w_mbytes_per_sec": 0 00:08:40.509 }, 00:08:40.509 "claimed": true, 00:08:40.509 "claim_type": "exclusive_write", 00:08:40.509 "zoned": false, 00:08:40.509 "supported_io_types": { 00:08:40.509 "read": true, 00:08:40.509 "write": true, 00:08:40.509 "unmap": true, 00:08:40.509 "flush": true, 00:08:40.509 "reset": true, 00:08:40.509 "nvme_admin": false, 00:08:40.509 "nvme_io": false, 00:08:40.509 "nvme_io_md": false, 00:08:40.509 "write_zeroes": true, 00:08:40.509 "zcopy": true, 00:08:40.509 "get_zone_info": false, 00:08:40.509 "zone_management": false, 00:08:40.509 "zone_append": false, 00:08:40.509 "compare": false, 
00:08:40.509 "compare_and_write": false, 00:08:40.509 "abort": true, 00:08:40.509 "seek_hole": false, 00:08:40.509 "seek_data": false, 00:08:40.509 "copy": true, 00:08:40.509 "nvme_iov_md": false 00:08:40.509 }, 00:08:40.509 "memory_domains": [ 00:08:40.509 { 00:08:40.509 "dma_device_id": "system", 00:08:40.509 "dma_device_type": 1 00:08:40.509 }, 00:08:40.509 { 00:08:40.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.509 "dma_device_type": 2 00:08:40.509 } 00:08:40.509 ], 00:08:40.509 "driver_specific": {} 00:08:40.509 } 00:08:40.509 ] 00:08:40.509 14:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.509 14:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:40.509 14:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:40.509 14:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.509 14:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.509 14:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:40.509 14:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.509 14:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:40.509 14:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.509 14:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.509 14:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.509 14:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.509 14:33:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.509 14:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.509 14:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.509 14:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.509 14:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.509 14:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.509 "name": "Existed_Raid", 00:08:40.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.509 "strip_size_kb": 64, 00:08:40.509 "state": "configuring", 00:08:40.509 "raid_level": "concat", 00:08:40.509 "superblock": false, 00:08:40.509 "num_base_bdevs": 4, 00:08:40.509 "num_base_bdevs_discovered": 1, 00:08:40.509 "num_base_bdevs_operational": 4, 00:08:40.509 "base_bdevs_list": [ 00:08:40.509 { 00:08:40.509 "name": "BaseBdev1", 00:08:40.509 "uuid": "b8fcc672-87a3-43cd-bb9a-90a2a8be7898", 00:08:40.509 "is_configured": true, 00:08:40.509 "data_offset": 0, 00:08:40.509 "data_size": 65536 00:08:40.509 }, 00:08:40.509 { 00:08:40.509 "name": "BaseBdev2", 00:08:40.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.509 "is_configured": false, 00:08:40.509 "data_offset": 0, 00:08:40.509 "data_size": 0 00:08:40.509 }, 00:08:40.509 { 00:08:40.509 "name": "BaseBdev3", 00:08:40.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.509 "is_configured": false, 00:08:40.509 "data_offset": 0, 00:08:40.509 "data_size": 0 00:08:40.509 }, 00:08:40.509 { 00:08:40.509 "name": "BaseBdev4", 00:08:40.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.509 "is_configured": false, 00:08:40.509 "data_offset": 0, 00:08:40.509 "data_size": 0 00:08:40.509 } 00:08:40.509 ] 00:08:40.509 }' 00:08:40.509 14:33:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.509 14:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.770 14:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:40.770 14:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.770 14:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.770 [2024-10-01 14:33:32.306032] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:40.770 [2024-10-01 14:33:32.306076] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:40.770 14:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.770 14:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:40.770 14:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.770 14:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.770 [2024-10-01 14:33:32.314080] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:40.770 [2024-10-01 14:33:32.315952] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:40.770 [2024-10-01 14:33:32.315988] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:40.770 [2024-10-01 14:33:32.315998] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:40.770 [2024-10-01 14:33:32.316010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:40.770 [2024-10-01 14:33:32.316017] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:08:40.770 [2024-10-01 14:33:32.316026] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:40.770 14:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.770 14:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:40.770 14:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:40.770 14:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:40.770 14:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.770 14:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.770 14:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:40.770 14:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.770 14:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:40.770 14:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.770 14:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.770 14:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.770 14:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.770 14:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.770 14:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.770 14:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:08:40.770 14:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.770 14:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.770 14:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.770 "name": "Existed_Raid", 00:08:40.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.770 "strip_size_kb": 64, 00:08:40.770 "state": "configuring", 00:08:40.770 "raid_level": "concat", 00:08:40.770 "superblock": false, 00:08:40.770 "num_base_bdevs": 4, 00:08:40.770 "num_base_bdevs_discovered": 1, 00:08:40.770 "num_base_bdevs_operational": 4, 00:08:40.770 "base_bdevs_list": [ 00:08:40.770 { 00:08:40.770 "name": "BaseBdev1", 00:08:40.770 "uuid": "b8fcc672-87a3-43cd-bb9a-90a2a8be7898", 00:08:40.770 "is_configured": true, 00:08:40.770 "data_offset": 0, 00:08:40.770 "data_size": 65536 00:08:40.770 }, 00:08:40.770 { 00:08:40.770 "name": "BaseBdev2", 00:08:40.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.770 "is_configured": false, 00:08:40.770 "data_offset": 0, 00:08:40.770 "data_size": 0 00:08:40.770 }, 00:08:40.770 { 00:08:40.770 "name": "BaseBdev3", 00:08:40.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.770 "is_configured": false, 00:08:40.770 "data_offset": 0, 00:08:40.770 "data_size": 0 00:08:40.770 }, 00:08:40.770 { 00:08:40.770 "name": "BaseBdev4", 00:08:40.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.770 "is_configured": false, 00:08:40.770 "data_offset": 0, 00:08:40.770 "data_size": 0 00:08:40.770 } 00:08:40.770 ] 00:08:40.770 }' 00:08:40.770 14:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.770 14:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.031 14:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2 00:08:41.031 14:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.031 14:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.031 [2024-10-01 14:33:32.676890] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:41.031 BaseBdev2 00:08:41.031 14:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.031 14:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:41.031 14:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:41.031 14:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:41.031 14:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:41.031 14:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:41.031 14:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:41.031 14:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:41.031 14:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.031 14:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.031 14:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.031 14:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:41.031 14:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.031 14:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.031 [ 00:08:41.031 { 00:08:41.031 "name": 
"BaseBdev2", 00:08:41.031 "aliases": [ 00:08:41.031 "d170ad15-ec0f-47ea-a693-b1dbdc703616" 00:08:41.031 ], 00:08:41.031 "product_name": "Malloc disk", 00:08:41.031 "block_size": 512, 00:08:41.031 "num_blocks": 65536, 00:08:41.031 "uuid": "d170ad15-ec0f-47ea-a693-b1dbdc703616", 00:08:41.031 "assigned_rate_limits": { 00:08:41.031 "rw_ios_per_sec": 0, 00:08:41.031 "rw_mbytes_per_sec": 0, 00:08:41.031 "r_mbytes_per_sec": 0, 00:08:41.031 "w_mbytes_per_sec": 0 00:08:41.031 }, 00:08:41.031 "claimed": true, 00:08:41.031 "claim_type": "exclusive_write", 00:08:41.031 "zoned": false, 00:08:41.031 "supported_io_types": { 00:08:41.031 "read": true, 00:08:41.031 "write": true, 00:08:41.031 "unmap": true, 00:08:41.031 "flush": true, 00:08:41.032 "reset": true, 00:08:41.032 "nvme_admin": false, 00:08:41.032 "nvme_io": false, 00:08:41.032 "nvme_io_md": false, 00:08:41.032 "write_zeroes": true, 00:08:41.032 "zcopy": true, 00:08:41.032 "get_zone_info": false, 00:08:41.032 "zone_management": false, 00:08:41.032 "zone_append": false, 00:08:41.032 "compare": false, 00:08:41.032 "compare_and_write": false, 00:08:41.032 "abort": true, 00:08:41.032 "seek_hole": false, 00:08:41.032 "seek_data": false, 00:08:41.032 "copy": true, 00:08:41.032 "nvme_iov_md": false 00:08:41.032 }, 00:08:41.032 "memory_domains": [ 00:08:41.032 { 00:08:41.032 "dma_device_id": "system", 00:08:41.032 "dma_device_type": 1 00:08:41.032 }, 00:08:41.032 { 00:08:41.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.032 "dma_device_type": 2 00:08:41.032 } 00:08:41.032 ], 00:08:41.032 "driver_specific": {} 00:08:41.032 } 00:08:41.032 ] 00:08:41.032 14:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.032 14:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:41.032 14:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:41.032 14:33:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:41.032 14:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:41.032 14:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.032 14:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.032 14:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:41.032 14:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.032 14:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:41.032 14:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.032 14:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.032 14:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.032 14:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.032 14:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.032 14:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.032 14:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.032 14:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.292 14:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.292 14:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.292 "name": "Existed_Raid", 00:08:41.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.292 
"strip_size_kb": 64, 00:08:41.292 "state": "configuring", 00:08:41.292 "raid_level": "concat", 00:08:41.292 "superblock": false, 00:08:41.292 "num_base_bdevs": 4, 00:08:41.292 "num_base_bdevs_discovered": 2, 00:08:41.292 "num_base_bdevs_operational": 4, 00:08:41.292 "base_bdevs_list": [ 00:08:41.292 { 00:08:41.292 "name": "BaseBdev1", 00:08:41.292 "uuid": "b8fcc672-87a3-43cd-bb9a-90a2a8be7898", 00:08:41.292 "is_configured": true, 00:08:41.292 "data_offset": 0, 00:08:41.292 "data_size": 65536 00:08:41.292 }, 00:08:41.292 { 00:08:41.292 "name": "BaseBdev2", 00:08:41.292 "uuid": "d170ad15-ec0f-47ea-a693-b1dbdc703616", 00:08:41.292 "is_configured": true, 00:08:41.292 "data_offset": 0, 00:08:41.292 "data_size": 65536 00:08:41.292 }, 00:08:41.292 { 00:08:41.292 "name": "BaseBdev3", 00:08:41.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.292 "is_configured": false, 00:08:41.292 "data_offset": 0, 00:08:41.292 "data_size": 0 00:08:41.292 }, 00:08:41.292 { 00:08:41.292 "name": "BaseBdev4", 00:08:41.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.292 "is_configured": false, 00:08:41.292 "data_offset": 0, 00:08:41.292 "data_size": 0 00:08:41.292 } 00:08:41.292 ] 00:08:41.292 }' 00:08:41.292 14:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.292 14:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.552 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:41.552 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.552 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.552 [2024-10-01 14:33:33.035969] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:41.552 BaseBdev3 00:08:41.552 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:08:41.552 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:41.552 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:41.552 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:41.552 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:41.552 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:41.552 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:41.552 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:41.552 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.552 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.552 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.552 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:41.552 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.552 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.552 [ 00:08:41.552 { 00:08:41.552 "name": "BaseBdev3", 00:08:41.552 "aliases": [ 00:08:41.552 "34ced5fa-5087-4e84-ac89-44b77050e868" 00:08:41.552 ], 00:08:41.552 "product_name": "Malloc disk", 00:08:41.552 "block_size": 512, 00:08:41.552 "num_blocks": 65536, 00:08:41.552 "uuid": "34ced5fa-5087-4e84-ac89-44b77050e868", 00:08:41.552 "assigned_rate_limits": { 00:08:41.552 "rw_ios_per_sec": 0, 00:08:41.552 "rw_mbytes_per_sec": 0, 00:08:41.552 "r_mbytes_per_sec": 0, 00:08:41.552 "w_mbytes_per_sec": 0 
00:08:41.552 }, 00:08:41.552 "claimed": true, 00:08:41.552 "claim_type": "exclusive_write", 00:08:41.552 "zoned": false, 00:08:41.552 "supported_io_types": { 00:08:41.552 "read": true, 00:08:41.552 "write": true, 00:08:41.552 "unmap": true, 00:08:41.552 "flush": true, 00:08:41.552 "reset": true, 00:08:41.552 "nvme_admin": false, 00:08:41.552 "nvme_io": false, 00:08:41.552 "nvme_io_md": false, 00:08:41.552 "write_zeroes": true, 00:08:41.552 "zcopy": true, 00:08:41.552 "get_zone_info": false, 00:08:41.552 "zone_management": false, 00:08:41.552 "zone_append": false, 00:08:41.552 "compare": false, 00:08:41.552 "compare_and_write": false, 00:08:41.552 "abort": true, 00:08:41.552 "seek_hole": false, 00:08:41.552 "seek_data": false, 00:08:41.552 "copy": true, 00:08:41.552 "nvme_iov_md": false 00:08:41.552 }, 00:08:41.552 "memory_domains": [ 00:08:41.552 { 00:08:41.552 "dma_device_id": "system", 00:08:41.552 "dma_device_type": 1 00:08:41.552 }, 00:08:41.552 { 00:08:41.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.552 "dma_device_type": 2 00:08:41.552 } 00:08:41.552 ], 00:08:41.552 "driver_specific": {} 00:08:41.552 } 00:08:41.552 ] 00:08:41.552 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.552 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:41.552 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:41.552 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:41.552 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:41.552 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.552 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.552 14:33:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:41.552 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.552 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:41.552 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.552 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.552 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.552 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.552 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.552 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.552 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.552 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.552 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.552 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.552 "name": "Existed_Raid", 00:08:41.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.552 "strip_size_kb": 64, 00:08:41.552 "state": "configuring", 00:08:41.552 "raid_level": "concat", 00:08:41.552 "superblock": false, 00:08:41.552 "num_base_bdevs": 4, 00:08:41.552 "num_base_bdevs_discovered": 3, 00:08:41.552 "num_base_bdevs_operational": 4, 00:08:41.552 "base_bdevs_list": [ 00:08:41.552 { 00:08:41.552 "name": "BaseBdev1", 00:08:41.552 "uuid": "b8fcc672-87a3-43cd-bb9a-90a2a8be7898", 00:08:41.552 "is_configured": true, 00:08:41.552 "data_offset": 
0, 00:08:41.552 "data_size": 65536 00:08:41.552 }, 00:08:41.552 { 00:08:41.552 "name": "BaseBdev2", 00:08:41.552 "uuid": "d170ad15-ec0f-47ea-a693-b1dbdc703616", 00:08:41.552 "is_configured": true, 00:08:41.552 "data_offset": 0, 00:08:41.552 "data_size": 65536 00:08:41.552 }, 00:08:41.552 { 00:08:41.552 "name": "BaseBdev3", 00:08:41.552 "uuid": "34ced5fa-5087-4e84-ac89-44b77050e868", 00:08:41.552 "is_configured": true, 00:08:41.552 "data_offset": 0, 00:08:41.552 "data_size": 65536 00:08:41.552 }, 00:08:41.552 { 00:08:41.552 "name": "BaseBdev4", 00:08:41.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.552 "is_configured": false, 00:08:41.552 "data_offset": 0, 00:08:41.552 "data_size": 0 00:08:41.552 } 00:08:41.552 ] 00:08:41.552 }' 00:08:41.552 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.552 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.812 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:08:41.813 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.813 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.813 [2024-10-01 14:33:33.402745] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:08:41.813 [2024-10-01 14:33:33.402965] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:41.813 [2024-10-01 14:33:33.402979] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:08:41.813 [2024-10-01 14:33:33.403253] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:41.813 [2024-10-01 14:33:33.403399] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:41.813 [2024-10-01 14:33:33.403409] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:41.813 [2024-10-01 14:33:33.403636] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:41.813 BaseBdev4 00:08:41.813 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.813 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:08:41.813 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:08:41.813 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:41.813 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:41.813 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:41.813 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:41.813 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:41.813 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.813 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.813 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.813 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:08:41.813 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.813 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.813 [ 00:08:41.813 { 00:08:41.813 "name": "BaseBdev4", 00:08:41.813 "aliases": [ 00:08:41.813 "059eb06a-c614-4c50-b645-bdd50d700a62" 00:08:41.813 ], 00:08:41.813 
"product_name": "Malloc disk", 00:08:41.813 "block_size": 512, 00:08:41.813 "num_blocks": 65536, 00:08:41.813 "uuid": "059eb06a-c614-4c50-b645-bdd50d700a62", 00:08:41.813 "assigned_rate_limits": { 00:08:41.813 "rw_ios_per_sec": 0, 00:08:41.813 "rw_mbytes_per_sec": 0, 00:08:41.813 "r_mbytes_per_sec": 0, 00:08:41.813 "w_mbytes_per_sec": 0 00:08:41.813 }, 00:08:41.813 "claimed": true, 00:08:41.813 "claim_type": "exclusive_write", 00:08:41.813 "zoned": false, 00:08:41.813 "supported_io_types": { 00:08:41.813 "read": true, 00:08:41.813 "write": true, 00:08:41.813 "unmap": true, 00:08:41.813 "flush": true, 00:08:41.813 "reset": true, 00:08:41.813 "nvme_admin": false, 00:08:41.813 "nvme_io": false, 00:08:41.813 "nvme_io_md": false, 00:08:41.813 "write_zeroes": true, 00:08:41.813 "zcopy": true, 00:08:41.813 "get_zone_info": false, 00:08:41.813 "zone_management": false, 00:08:41.813 "zone_append": false, 00:08:41.813 "compare": false, 00:08:41.813 "compare_and_write": false, 00:08:41.813 "abort": true, 00:08:41.813 "seek_hole": false, 00:08:41.813 "seek_data": false, 00:08:41.813 "copy": true, 00:08:41.813 "nvme_iov_md": false 00:08:41.813 }, 00:08:41.813 "memory_domains": [ 00:08:41.813 { 00:08:41.813 "dma_device_id": "system", 00:08:41.813 "dma_device_type": 1 00:08:41.813 }, 00:08:41.813 { 00:08:41.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.813 "dma_device_type": 2 00:08:41.813 } 00:08:41.813 ], 00:08:41.813 "driver_specific": {} 00:08:41.813 } 00:08:41.813 ] 00:08:41.813 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.813 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:41.813 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:41.813 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:41.813 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # 
verify_raid_bdev_state Existed_Raid online concat 64 4 00:08:41.813 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.813 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:41.813 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:41.813 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.813 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:41.813 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.813 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.813 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.813 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.813 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.813 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.813 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.813 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.813 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.813 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.813 "name": "Existed_Raid", 00:08:41.813 "uuid": "81df7c12-054c-46ae-bb0c-d16bbb529fb0", 00:08:41.813 "strip_size_kb": 64, 00:08:41.813 "state": "online", 00:08:41.813 "raid_level": "concat", 00:08:41.813 "superblock": false, 00:08:41.813 
"num_base_bdevs": 4, 00:08:41.813 "num_base_bdevs_discovered": 4, 00:08:41.813 "num_base_bdevs_operational": 4, 00:08:41.813 "base_bdevs_list": [ 00:08:41.813 { 00:08:41.813 "name": "BaseBdev1", 00:08:41.813 "uuid": "b8fcc672-87a3-43cd-bb9a-90a2a8be7898", 00:08:41.813 "is_configured": true, 00:08:41.813 "data_offset": 0, 00:08:41.813 "data_size": 65536 00:08:41.813 }, 00:08:41.813 { 00:08:41.813 "name": "BaseBdev2", 00:08:41.813 "uuid": "d170ad15-ec0f-47ea-a693-b1dbdc703616", 00:08:41.813 "is_configured": true, 00:08:41.813 "data_offset": 0, 00:08:41.813 "data_size": 65536 00:08:41.813 }, 00:08:41.813 { 00:08:41.813 "name": "BaseBdev3", 00:08:41.813 "uuid": "34ced5fa-5087-4e84-ac89-44b77050e868", 00:08:41.813 "is_configured": true, 00:08:41.813 "data_offset": 0, 00:08:41.813 "data_size": 65536 00:08:41.813 }, 00:08:41.813 { 00:08:41.813 "name": "BaseBdev4", 00:08:41.813 "uuid": "059eb06a-c614-4c50-b645-bdd50d700a62", 00:08:41.813 "is_configured": true, 00:08:41.813 "data_offset": 0, 00:08:41.813 "data_size": 65536 00:08:41.813 } 00:08:41.813 ] 00:08:41.813 }' 00:08:41.813 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.813 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.073 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:42.073 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:42.073 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:42.073 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:42.073 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:42.073 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:42.073 14:33:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:42.073 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.073 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.073 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:42.073 [2024-10-01 14:33:33.751220] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:42.335 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.335 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:42.335 "name": "Existed_Raid", 00:08:42.335 "aliases": [ 00:08:42.335 "81df7c12-054c-46ae-bb0c-d16bbb529fb0" 00:08:42.335 ], 00:08:42.335 "product_name": "Raid Volume", 00:08:42.335 "block_size": 512, 00:08:42.335 "num_blocks": 262144, 00:08:42.335 "uuid": "81df7c12-054c-46ae-bb0c-d16bbb529fb0", 00:08:42.335 "assigned_rate_limits": { 00:08:42.335 "rw_ios_per_sec": 0, 00:08:42.335 "rw_mbytes_per_sec": 0, 00:08:42.335 "r_mbytes_per_sec": 0, 00:08:42.335 "w_mbytes_per_sec": 0 00:08:42.335 }, 00:08:42.335 "claimed": false, 00:08:42.335 "zoned": false, 00:08:42.335 "supported_io_types": { 00:08:42.335 "read": true, 00:08:42.335 "write": true, 00:08:42.335 "unmap": true, 00:08:42.335 "flush": true, 00:08:42.335 "reset": true, 00:08:42.335 "nvme_admin": false, 00:08:42.335 "nvme_io": false, 00:08:42.335 "nvme_io_md": false, 00:08:42.335 "write_zeroes": true, 00:08:42.335 "zcopy": false, 00:08:42.335 "get_zone_info": false, 00:08:42.335 "zone_management": false, 00:08:42.335 "zone_append": false, 00:08:42.335 "compare": false, 00:08:42.335 "compare_and_write": false, 00:08:42.335 "abort": false, 00:08:42.335 "seek_hole": false, 00:08:42.335 "seek_data": false, 00:08:42.335 "copy": false, 00:08:42.335 "nvme_iov_md": false 00:08:42.335 }, 
00:08:42.335 "memory_domains": [ 00:08:42.335 { 00:08:42.335 "dma_device_id": "system", 00:08:42.335 "dma_device_type": 1 00:08:42.335 }, 00:08:42.335 { 00:08:42.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.335 "dma_device_type": 2 00:08:42.335 }, 00:08:42.335 { 00:08:42.335 "dma_device_id": "system", 00:08:42.335 "dma_device_type": 1 00:08:42.335 }, 00:08:42.335 { 00:08:42.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.335 "dma_device_type": 2 00:08:42.335 }, 00:08:42.335 { 00:08:42.335 "dma_device_id": "system", 00:08:42.335 "dma_device_type": 1 00:08:42.335 }, 00:08:42.335 { 00:08:42.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.335 "dma_device_type": 2 00:08:42.335 }, 00:08:42.335 { 00:08:42.335 "dma_device_id": "system", 00:08:42.335 "dma_device_type": 1 00:08:42.335 }, 00:08:42.335 { 00:08:42.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.335 "dma_device_type": 2 00:08:42.335 } 00:08:42.335 ], 00:08:42.335 "driver_specific": { 00:08:42.335 "raid": { 00:08:42.335 "uuid": "81df7c12-054c-46ae-bb0c-d16bbb529fb0", 00:08:42.335 "strip_size_kb": 64, 00:08:42.335 "state": "online", 00:08:42.335 "raid_level": "concat", 00:08:42.335 "superblock": false, 00:08:42.335 "num_base_bdevs": 4, 00:08:42.335 "num_base_bdevs_discovered": 4, 00:08:42.335 "num_base_bdevs_operational": 4, 00:08:42.335 "base_bdevs_list": [ 00:08:42.335 { 00:08:42.335 "name": "BaseBdev1", 00:08:42.335 "uuid": "b8fcc672-87a3-43cd-bb9a-90a2a8be7898", 00:08:42.335 "is_configured": true, 00:08:42.335 "data_offset": 0, 00:08:42.335 "data_size": 65536 00:08:42.335 }, 00:08:42.335 { 00:08:42.335 "name": "BaseBdev2", 00:08:42.335 "uuid": "d170ad15-ec0f-47ea-a693-b1dbdc703616", 00:08:42.335 "is_configured": true, 00:08:42.335 "data_offset": 0, 00:08:42.335 "data_size": 65536 00:08:42.335 }, 00:08:42.335 { 00:08:42.335 "name": "BaseBdev3", 00:08:42.335 "uuid": "34ced5fa-5087-4e84-ac89-44b77050e868", 00:08:42.335 "is_configured": true, 00:08:42.335 "data_offset": 0, 
00:08:42.335 "data_size": 65536 00:08:42.335 }, 00:08:42.335 { 00:08:42.336 "name": "BaseBdev4", 00:08:42.336 "uuid": "059eb06a-c614-4c50-b645-bdd50d700a62", 00:08:42.336 "is_configured": true, 00:08:42.336 "data_offset": 0, 00:08:42.336 "data_size": 65536 00:08:42.336 } 00:08:42.336 ] 00:08:42.336 } 00:08:42.336 } 00:08:42.336 }' 00:08:42.336 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:42.336 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:42.336 BaseBdev2 00:08:42.336 BaseBdev3 00:08:42.336 BaseBdev4' 00:08:42.336 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.336 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:42.336 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:42.336 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.336 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:42.336 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.336 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.336 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.336 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.336 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.336 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:08:42.336 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.336 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:42.336 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.336 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.336 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.336 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.336 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.336 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:42.336 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:42.336 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.336 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.336 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.336 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.336 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.336 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.336 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:42.336 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:08:42.336 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.336 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.336 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.336 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.336 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.336 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.336 14:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:42.336 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.336 14:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.336 [2024-10-01 14:33:33.990966] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:42.336 [2024-10-01 14:33:33.990994] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:42.336 [2024-10-01 14:33:33.991044] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:42.600 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.600 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:42.600 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:42.600 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:42.600 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:42.600 14:33:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:42.600 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:08:42.600 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.600 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:42.600 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:42.600 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.600 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.600 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.600 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.600 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.600 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.600 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.600 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.600 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.600 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.600 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.600 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.600 "name": "Existed_Raid", 00:08:42.600 "uuid": "81df7c12-054c-46ae-bb0c-d16bbb529fb0", 00:08:42.600 
"strip_size_kb": 64, 00:08:42.600 "state": "offline", 00:08:42.600 "raid_level": "concat", 00:08:42.600 "superblock": false, 00:08:42.600 "num_base_bdevs": 4, 00:08:42.600 "num_base_bdevs_discovered": 3, 00:08:42.600 "num_base_bdevs_operational": 3, 00:08:42.600 "base_bdevs_list": [ 00:08:42.600 { 00:08:42.600 "name": null, 00:08:42.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.600 "is_configured": false, 00:08:42.600 "data_offset": 0, 00:08:42.600 "data_size": 65536 00:08:42.600 }, 00:08:42.600 { 00:08:42.600 "name": "BaseBdev2", 00:08:42.600 "uuid": "d170ad15-ec0f-47ea-a693-b1dbdc703616", 00:08:42.600 "is_configured": true, 00:08:42.600 "data_offset": 0, 00:08:42.600 "data_size": 65536 00:08:42.600 }, 00:08:42.600 { 00:08:42.600 "name": "BaseBdev3", 00:08:42.600 "uuid": "34ced5fa-5087-4e84-ac89-44b77050e868", 00:08:42.600 "is_configured": true, 00:08:42.600 "data_offset": 0, 00:08:42.600 "data_size": 65536 00:08:42.600 }, 00:08:42.600 { 00:08:42.600 "name": "BaseBdev4", 00:08:42.600 "uuid": "059eb06a-c614-4c50-b645-bdd50d700a62", 00:08:42.600 "is_configured": true, 00:08:42.600 "data_offset": 0, 00:08:42.600 "data_size": 65536 00:08:42.600 } 00:08:42.600 ] 00:08:42.600 }' 00:08:42.600 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.600 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.861 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:42.861 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:42.861 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:42.861 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.861 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.861 14:33:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.861 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.861 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:42.861 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:42.861 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:42.861 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.861 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.861 [2024-10-01 14:33:34.409538] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:42.862 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.862 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:42.862 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:42.862 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.862 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:42.862 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.862 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.862 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.862 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:42.862 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:42.862 14:33:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:42.862 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.862 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.862 [2024-10-01 14:33:34.507928] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:43.123 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.123 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:43.123 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:43.123 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.123 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:43.123 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.123 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.123 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.123 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:43.123 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:43.123 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:08:43.123 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.123 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.123 [2024-10-01 14:33:34.605686] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:08:43.123 [2024-10-01 
14:33:34.605737] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:43.123 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.123 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:43.123 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.124 BaseBdev2 00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.124 [ 00:08:43.124 { 00:08:43.124 "name": "BaseBdev2", 00:08:43.124 "aliases": [ 00:08:43.124 "093b878c-d8a8-4ae4-88ae-ef646c7ff459" 00:08:43.124 ], 00:08:43.124 "product_name": "Malloc disk", 00:08:43.124 "block_size": 512, 00:08:43.124 "num_blocks": 65536, 00:08:43.124 "uuid": "093b878c-d8a8-4ae4-88ae-ef646c7ff459", 00:08:43.124 "assigned_rate_limits": { 00:08:43.124 "rw_ios_per_sec": 0, 00:08:43.124 "rw_mbytes_per_sec": 0, 00:08:43.124 "r_mbytes_per_sec": 0, 00:08:43.124 "w_mbytes_per_sec": 0 00:08:43.124 }, 
00:08:43.124 "claimed": false, 00:08:43.124 "zoned": false, 00:08:43.124 "supported_io_types": { 00:08:43.124 "read": true, 00:08:43.124 "write": true, 00:08:43.124 "unmap": true, 00:08:43.124 "flush": true, 00:08:43.124 "reset": true, 00:08:43.124 "nvme_admin": false, 00:08:43.124 "nvme_io": false, 00:08:43.124 "nvme_io_md": false, 00:08:43.124 "write_zeroes": true, 00:08:43.124 "zcopy": true, 00:08:43.124 "get_zone_info": false, 00:08:43.124 "zone_management": false, 00:08:43.124 "zone_append": false, 00:08:43.124 "compare": false, 00:08:43.124 "compare_and_write": false, 00:08:43.124 "abort": true, 00:08:43.124 "seek_hole": false, 00:08:43.124 "seek_data": false, 00:08:43.124 "copy": true, 00:08:43.124 "nvme_iov_md": false 00:08:43.124 }, 00:08:43.124 "memory_domains": [ 00:08:43.124 { 00:08:43.124 "dma_device_id": "system", 00:08:43.124 "dma_device_type": 1 00:08:43.124 }, 00:08:43.124 { 00:08:43.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.124 "dma_device_type": 2 00:08:43.124 } 00:08:43.124 ], 00:08:43.124 "driver_specific": {} 00:08:43.124 } 00:08:43.124 ] 00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.124 BaseBdev3 00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.124 
14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.124 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.386 [ 00:08:43.386 { 00:08:43.386 "name": "BaseBdev3", 00:08:43.386 "aliases": [ 00:08:43.386 "e902ed33-599d-406b-a7db-22fd8c07ae73" 00:08:43.386 ], 00:08:43.386 "product_name": "Malloc disk", 00:08:43.386 "block_size": 512, 00:08:43.386 "num_blocks": 65536, 00:08:43.386 "uuid": "e902ed33-599d-406b-a7db-22fd8c07ae73", 00:08:43.386 "assigned_rate_limits": { 00:08:43.386 "rw_ios_per_sec": 0, 00:08:43.386 "rw_mbytes_per_sec": 0, 00:08:43.386 "r_mbytes_per_sec": 0, 00:08:43.386 "w_mbytes_per_sec": 0 00:08:43.386 }, 00:08:43.386 "claimed": 
false, 00:08:43.386 "zoned": false, 00:08:43.386 "supported_io_types": { 00:08:43.386 "read": true, 00:08:43.386 "write": true, 00:08:43.386 "unmap": true, 00:08:43.386 "flush": true, 00:08:43.386 "reset": true, 00:08:43.386 "nvme_admin": false, 00:08:43.386 "nvme_io": false, 00:08:43.386 "nvme_io_md": false, 00:08:43.386 "write_zeroes": true, 00:08:43.386 "zcopy": true, 00:08:43.386 "get_zone_info": false, 00:08:43.386 "zone_management": false, 00:08:43.386 "zone_append": false, 00:08:43.386 "compare": false, 00:08:43.386 "compare_and_write": false, 00:08:43.386 "abort": true, 00:08:43.386 "seek_hole": false, 00:08:43.386 "seek_data": false, 00:08:43.386 "copy": true, 00:08:43.386 "nvme_iov_md": false 00:08:43.386 }, 00:08:43.386 "memory_domains": [ 00:08:43.386 { 00:08:43.386 "dma_device_id": "system", 00:08:43.386 "dma_device_type": 1 00:08:43.386 }, 00:08:43.386 { 00:08:43.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.386 "dma_device_type": 2 00:08:43.386 } 00:08:43.386 ], 00:08:43.386 "driver_specific": {} 00:08:43.386 } 00:08:43.386 ] 00:08:43.386 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.386 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:43.386 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:43.386 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:43.386 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:08:43.386 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.386 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.386 BaseBdev4 00:08:43.387 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.387 14:33:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:08:43.387 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:08:43.387 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:43.387 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:43.387 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:43.387 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:43.387 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:43.387 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.387 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.387 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.387 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:08:43.387 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.387 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.387 [ 00:08:43.387 { 00:08:43.387 "name": "BaseBdev4", 00:08:43.387 "aliases": [ 00:08:43.387 "1da4404e-7df8-49ec-a485-1675ea8fa0b2" 00:08:43.387 ], 00:08:43.387 "product_name": "Malloc disk", 00:08:43.387 "block_size": 512, 00:08:43.387 "num_blocks": 65536, 00:08:43.387 "uuid": "1da4404e-7df8-49ec-a485-1675ea8fa0b2", 00:08:43.387 "assigned_rate_limits": { 00:08:43.387 "rw_ios_per_sec": 0, 00:08:43.387 "rw_mbytes_per_sec": 0, 00:08:43.387 "r_mbytes_per_sec": 0, 00:08:43.387 "w_mbytes_per_sec": 0 00:08:43.387 }, 00:08:43.387 "claimed": false, 
00:08:43.387 "zoned": false, 00:08:43.387 "supported_io_types": { 00:08:43.387 "read": true, 00:08:43.387 "write": true, 00:08:43.387 "unmap": true, 00:08:43.387 "flush": true, 00:08:43.387 "reset": true, 00:08:43.387 "nvme_admin": false, 00:08:43.387 "nvme_io": false, 00:08:43.387 "nvme_io_md": false, 00:08:43.387 "write_zeroes": true, 00:08:43.387 "zcopy": true, 00:08:43.387 "get_zone_info": false, 00:08:43.387 "zone_management": false, 00:08:43.387 "zone_append": false, 00:08:43.387 "compare": false, 00:08:43.387 "compare_and_write": false, 00:08:43.387 "abort": true, 00:08:43.387 "seek_hole": false, 00:08:43.387 "seek_data": false, 00:08:43.387 "copy": true, 00:08:43.387 "nvme_iov_md": false 00:08:43.387 }, 00:08:43.387 "memory_domains": [ 00:08:43.387 { 00:08:43.387 "dma_device_id": "system", 00:08:43.387 "dma_device_type": 1 00:08:43.387 }, 00:08:43.387 { 00:08:43.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.387 "dma_device_type": 2 00:08:43.387 } 00:08:43.387 ], 00:08:43.387 "driver_specific": {} 00:08:43.387 } 00:08:43.387 ] 00:08:43.387 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.387 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:43.387 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:43.387 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:43.387 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:43.387 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.387 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.387 [2024-10-01 14:33:34.875753] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev1 00:08:43.387 [2024-10-01 14:33:34.875889] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:43.387 [2024-10-01 14:33:34.875959] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:43.387 [2024-10-01 14:33:34.877863] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:43.387 [2024-10-01 14:33:34.877987] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:08:43.387 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.387 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:43.387 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.387 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.387 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:43.387 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.387 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:43.387 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.387 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.387 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.387 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.387 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.387 14:33:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.387 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.387 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.387 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.387 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.387 "name": "Existed_Raid", 00:08:43.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.387 "strip_size_kb": 64, 00:08:43.387 "state": "configuring", 00:08:43.387 "raid_level": "concat", 00:08:43.387 "superblock": false, 00:08:43.387 "num_base_bdevs": 4, 00:08:43.387 "num_base_bdevs_discovered": 3, 00:08:43.387 "num_base_bdevs_operational": 4, 00:08:43.387 "base_bdevs_list": [ 00:08:43.387 { 00:08:43.387 "name": "BaseBdev1", 00:08:43.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.387 "is_configured": false, 00:08:43.387 "data_offset": 0, 00:08:43.387 "data_size": 0 00:08:43.387 }, 00:08:43.387 { 00:08:43.387 "name": "BaseBdev2", 00:08:43.387 "uuid": "093b878c-d8a8-4ae4-88ae-ef646c7ff459", 00:08:43.387 "is_configured": true, 00:08:43.387 "data_offset": 0, 00:08:43.387 "data_size": 65536 00:08:43.387 }, 00:08:43.387 { 00:08:43.387 "name": "BaseBdev3", 00:08:43.387 "uuid": "e902ed33-599d-406b-a7db-22fd8c07ae73", 00:08:43.387 "is_configured": true, 00:08:43.387 "data_offset": 0, 00:08:43.387 "data_size": 65536 00:08:43.387 }, 00:08:43.387 { 00:08:43.387 "name": "BaseBdev4", 00:08:43.387 "uuid": "1da4404e-7df8-49ec-a485-1675ea8fa0b2", 00:08:43.387 "is_configured": true, 00:08:43.387 "data_offset": 0, 00:08:43.387 "data_size": 65536 00:08:43.387 } 00:08:43.387 ] 00:08:43.387 }' 00:08:43.387 14:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.387 14:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 
-- # set +x 00:08:43.649 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:43.649 14:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.649 14:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.649 [2024-10-01 14:33:35.195830] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:43.649 14:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.649 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:43.649 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.649 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.649 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:43.649 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.649 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:43.649 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.649 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.649 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.649 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.649 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.649 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.649 
14:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.649 14:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.649 14:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.649 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.649 "name": "Existed_Raid", 00:08:43.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.649 "strip_size_kb": 64, 00:08:43.649 "state": "configuring", 00:08:43.649 "raid_level": "concat", 00:08:43.649 "superblock": false, 00:08:43.649 "num_base_bdevs": 4, 00:08:43.649 "num_base_bdevs_discovered": 2, 00:08:43.649 "num_base_bdevs_operational": 4, 00:08:43.649 "base_bdevs_list": [ 00:08:43.649 { 00:08:43.649 "name": "BaseBdev1", 00:08:43.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.649 "is_configured": false, 00:08:43.649 "data_offset": 0, 00:08:43.649 "data_size": 0 00:08:43.649 }, 00:08:43.649 { 00:08:43.649 "name": null, 00:08:43.649 "uuid": "093b878c-d8a8-4ae4-88ae-ef646c7ff459", 00:08:43.649 "is_configured": false, 00:08:43.649 "data_offset": 0, 00:08:43.649 "data_size": 65536 00:08:43.649 }, 00:08:43.649 { 00:08:43.649 "name": "BaseBdev3", 00:08:43.649 "uuid": "e902ed33-599d-406b-a7db-22fd8c07ae73", 00:08:43.649 "is_configured": true, 00:08:43.649 "data_offset": 0, 00:08:43.649 "data_size": 65536 00:08:43.649 }, 00:08:43.649 { 00:08:43.649 "name": "BaseBdev4", 00:08:43.649 "uuid": "1da4404e-7df8-49ec-a485-1675ea8fa0b2", 00:08:43.649 "is_configured": true, 00:08:43.649 "data_offset": 0, 00:08:43.649 "data_size": 65536 00:08:43.649 } 00:08:43.649 ] 00:08:43.649 }' 00:08:43.649 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.649 14:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.910 14:33:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.910 14:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.910 14:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.910 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:43.910 14:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.910 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:43.910 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:43.910 14:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.910 14:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.910 [2024-10-01 14:33:35.590306] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:43.910 BaseBdev1 00:08:44.172 14:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.172 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:44.172 14:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:44.172 14:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:44.172 14:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:44.172 14:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:44.172 14:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:44.172 14:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd 
bdev_wait_for_examine 00:08:44.172 14:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.172 14:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.172 14:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.172 14:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:44.172 14:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.172 14:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.172 [ 00:08:44.172 { 00:08:44.172 "name": "BaseBdev1", 00:08:44.172 "aliases": [ 00:08:44.172 "ce29b609-18b5-4c36-ab9a-babb85f8c3f4" 00:08:44.172 ], 00:08:44.172 "product_name": "Malloc disk", 00:08:44.172 "block_size": 512, 00:08:44.172 "num_blocks": 65536, 00:08:44.172 "uuid": "ce29b609-18b5-4c36-ab9a-babb85f8c3f4", 00:08:44.172 "assigned_rate_limits": { 00:08:44.172 "rw_ios_per_sec": 0, 00:08:44.172 "rw_mbytes_per_sec": 0, 00:08:44.172 "r_mbytes_per_sec": 0, 00:08:44.172 "w_mbytes_per_sec": 0 00:08:44.172 }, 00:08:44.172 "claimed": true, 00:08:44.172 "claim_type": "exclusive_write", 00:08:44.172 "zoned": false, 00:08:44.172 "supported_io_types": { 00:08:44.172 "read": true, 00:08:44.172 "write": true, 00:08:44.172 "unmap": true, 00:08:44.172 "flush": true, 00:08:44.172 "reset": true, 00:08:44.172 "nvme_admin": false, 00:08:44.172 "nvme_io": false, 00:08:44.172 "nvme_io_md": false, 00:08:44.172 "write_zeroes": true, 00:08:44.172 "zcopy": true, 00:08:44.172 "get_zone_info": false, 00:08:44.172 "zone_management": false, 00:08:44.172 "zone_append": false, 00:08:44.172 "compare": false, 00:08:44.172 "compare_and_write": false, 00:08:44.172 "abort": true, 00:08:44.172 "seek_hole": false, 00:08:44.172 "seek_data": false, 00:08:44.172 "copy": true, 00:08:44.172 "nvme_iov_md": false 
00:08:44.172 }, 00:08:44.172 "memory_domains": [ 00:08:44.172 { 00:08:44.172 "dma_device_id": "system", 00:08:44.172 "dma_device_type": 1 00:08:44.172 }, 00:08:44.172 { 00:08:44.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.172 "dma_device_type": 2 00:08:44.172 } 00:08:44.172 ], 00:08:44.172 "driver_specific": {} 00:08:44.172 } 00:08:44.172 ] 00:08:44.172 14:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.172 14:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:44.172 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:44.172 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.172 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.172 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:44.172 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.172 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:44.172 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.172 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.172 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.172 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.172 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.172 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.172 14:33:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.172 14:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.172 14:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.172 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.172 "name": "Existed_Raid", 00:08:44.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.172 "strip_size_kb": 64, 00:08:44.172 "state": "configuring", 00:08:44.172 "raid_level": "concat", 00:08:44.172 "superblock": false, 00:08:44.172 "num_base_bdevs": 4, 00:08:44.172 "num_base_bdevs_discovered": 3, 00:08:44.172 "num_base_bdevs_operational": 4, 00:08:44.172 "base_bdevs_list": [ 00:08:44.172 { 00:08:44.172 "name": "BaseBdev1", 00:08:44.172 "uuid": "ce29b609-18b5-4c36-ab9a-babb85f8c3f4", 00:08:44.172 "is_configured": true, 00:08:44.172 "data_offset": 0, 00:08:44.172 "data_size": 65536 00:08:44.172 }, 00:08:44.172 { 00:08:44.172 "name": null, 00:08:44.172 "uuid": "093b878c-d8a8-4ae4-88ae-ef646c7ff459", 00:08:44.172 "is_configured": false, 00:08:44.172 "data_offset": 0, 00:08:44.172 "data_size": 65536 00:08:44.172 }, 00:08:44.172 { 00:08:44.172 "name": "BaseBdev3", 00:08:44.172 "uuid": "e902ed33-599d-406b-a7db-22fd8c07ae73", 00:08:44.172 "is_configured": true, 00:08:44.172 "data_offset": 0, 00:08:44.172 "data_size": 65536 00:08:44.172 }, 00:08:44.172 { 00:08:44.172 "name": "BaseBdev4", 00:08:44.172 "uuid": "1da4404e-7df8-49ec-a485-1675ea8fa0b2", 00:08:44.172 "is_configured": true, 00:08:44.172 "data_offset": 0, 00:08:44.172 "data_size": 65536 00:08:44.172 } 00:08:44.172 ] 00:08:44.172 }' 00:08:44.173 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.173 14:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.435 14:33:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.435 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:44.435 14:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.435 14:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.435 14:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.435 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:44.435 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:44.435 14:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.435 14:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.435 [2024-10-01 14:33:35.970457] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:44.435 14:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.435 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:44.435 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.435 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.435 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:44.435 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.435 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:44.435 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:44.435 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.435 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.435 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.435 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.435 14:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.435 14:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.435 14:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.435 14:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.435 14:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.435 "name": "Existed_Raid", 00:08:44.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.435 "strip_size_kb": 64, 00:08:44.435 "state": "configuring", 00:08:44.435 "raid_level": "concat", 00:08:44.435 "superblock": false, 00:08:44.435 "num_base_bdevs": 4, 00:08:44.435 "num_base_bdevs_discovered": 2, 00:08:44.435 "num_base_bdevs_operational": 4, 00:08:44.435 "base_bdevs_list": [ 00:08:44.435 { 00:08:44.435 "name": "BaseBdev1", 00:08:44.435 "uuid": "ce29b609-18b5-4c36-ab9a-babb85f8c3f4", 00:08:44.435 "is_configured": true, 00:08:44.435 "data_offset": 0, 00:08:44.435 "data_size": 65536 00:08:44.435 }, 00:08:44.435 { 00:08:44.435 "name": null, 00:08:44.435 "uuid": "093b878c-d8a8-4ae4-88ae-ef646c7ff459", 00:08:44.435 "is_configured": false, 00:08:44.435 "data_offset": 0, 00:08:44.435 "data_size": 65536 00:08:44.435 }, 00:08:44.435 { 00:08:44.435 "name": null, 00:08:44.435 "uuid": "e902ed33-599d-406b-a7db-22fd8c07ae73", 00:08:44.435 "is_configured": false, 
00:08:44.435 "data_offset": 0, 00:08:44.435 "data_size": 65536 00:08:44.435 }, 00:08:44.435 { 00:08:44.435 "name": "BaseBdev4", 00:08:44.435 "uuid": "1da4404e-7df8-49ec-a485-1675ea8fa0b2", 00:08:44.435 "is_configured": true, 00:08:44.435 "data_offset": 0, 00:08:44.435 "data_size": 65536 00:08:44.435 } 00:08:44.435 ] 00:08:44.435 }' 00:08:44.435 14:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.435 14:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.712 14:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:44.712 14:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.712 14:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.712 14:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.712 14:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.712 14:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:44.712 14:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:44.712 14:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.712 14:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.712 [2024-10-01 14:33:36.330561] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:44.712 14:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.712 14:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:44.712 14:33:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.712 14:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.712 14:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:44.712 14:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.712 14:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:44.712 14:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.712 14:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.712 14:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.712 14:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.712 14:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.712 14:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.712 14:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.712 14:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.712 14:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.712 14:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.712 "name": "Existed_Raid", 00:08:44.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.712 "strip_size_kb": 64, 00:08:44.712 "state": "configuring", 00:08:44.712 "raid_level": "concat", 00:08:44.712 "superblock": false, 00:08:44.712 "num_base_bdevs": 4, 00:08:44.712 "num_base_bdevs_discovered": 3, 00:08:44.712 
"num_base_bdevs_operational": 4, 00:08:44.712 "base_bdevs_list": [ 00:08:44.712 { 00:08:44.712 "name": "BaseBdev1", 00:08:44.712 "uuid": "ce29b609-18b5-4c36-ab9a-babb85f8c3f4", 00:08:44.712 "is_configured": true, 00:08:44.712 "data_offset": 0, 00:08:44.712 "data_size": 65536 00:08:44.712 }, 00:08:44.712 { 00:08:44.712 "name": null, 00:08:44.712 "uuid": "093b878c-d8a8-4ae4-88ae-ef646c7ff459", 00:08:44.712 "is_configured": false, 00:08:44.712 "data_offset": 0, 00:08:44.712 "data_size": 65536 00:08:44.712 }, 00:08:44.712 { 00:08:44.712 "name": "BaseBdev3", 00:08:44.712 "uuid": "e902ed33-599d-406b-a7db-22fd8c07ae73", 00:08:44.712 "is_configured": true, 00:08:44.712 "data_offset": 0, 00:08:44.712 "data_size": 65536 00:08:44.712 }, 00:08:44.712 { 00:08:44.712 "name": "BaseBdev4", 00:08:44.712 "uuid": "1da4404e-7df8-49ec-a485-1675ea8fa0b2", 00:08:44.712 "is_configured": true, 00:08:44.712 "data_offset": 0, 00:08:44.712 "data_size": 65536 00:08:44.712 } 00:08:44.712 ] 00:08:44.712 }' 00:08:44.712 14:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.712 14:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.006 14:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.006 14:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:45.006 14:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.006 14:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.006 14:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.006 14:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:45.006 14:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 
00:08:45.006 14:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.006 14:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.006 [2024-10-01 14:33:36.678642] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:45.266 14:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.266 14:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:45.266 14:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.266 14:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.266 14:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:45.267 14:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.267 14:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:45.267 14:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.267 14:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.267 14:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.267 14:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.267 14:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.267 14:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.267 14:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.267 14:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.267 14:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.267 14:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.267 "name": "Existed_Raid", 00:08:45.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.267 "strip_size_kb": 64, 00:08:45.267 "state": "configuring", 00:08:45.267 "raid_level": "concat", 00:08:45.267 "superblock": false, 00:08:45.267 "num_base_bdevs": 4, 00:08:45.267 "num_base_bdevs_discovered": 2, 00:08:45.267 "num_base_bdevs_operational": 4, 00:08:45.267 "base_bdevs_list": [ 00:08:45.267 { 00:08:45.267 "name": null, 00:08:45.267 "uuid": "ce29b609-18b5-4c36-ab9a-babb85f8c3f4", 00:08:45.267 "is_configured": false, 00:08:45.267 "data_offset": 0, 00:08:45.267 "data_size": 65536 00:08:45.267 }, 00:08:45.267 { 00:08:45.267 "name": null, 00:08:45.267 "uuid": "093b878c-d8a8-4ae4-88ae-ef646c7ff459", 00:08:45.267 "is_configured": false, 00:08:45.267 "data_offset": 0, 00:08:45.267 "data_size": 65536 00:08:45.267 }, 00:08:45.267 { 00:08:45.267 "name": "BaseBdev3", 00:08:45.267 "uuid": "e902ed33-599d-406b-a7db-22fd8c07ae73", 00:08:45.267 "is_configured": true, 00:08:45.267 "data_offset": 0, 00:08:45.267 "data_size": 65536 00:08:45.267 }, 00:08:45.267 { 00:08:45.267 "name": "BaseBdev4", 00:08:45.267 "uuid": "1da4404e-7df8-49ec-a485-1675ea8fa0b2", 00:08:45.267 "is_configured": true, 00:08:45.267 "data_offset": 0, 00:08:45.267 "data_size": 65536 00:08:45.267 } 00:08:45.267 ] 00:08:45.267 }' 00:08:45.267 14:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.267 14:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.528 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.528 14:33:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:45.529 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:45.529 14:33:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.529 14:33:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.529 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:45.529 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:45.529 14:33:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.529 14:33:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.529 [2024-10-01 14:33:37.089874] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:45.529 14:33:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.529 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:45.529 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.529 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.529 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:45.529 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.529 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:45.529 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.529 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:08:45.529 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.529 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.529 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.529 14:33:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.529 14:33:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.529 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.529 14:33:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.529 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.529 "name": "Existed_Raid", 00:08:45.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.529 "strip_size_kb": 64, 00:08:45.529 "state": "configuring", 00:08:45.529 "raid_level": "concat", 00:08:45.529 "superblock": false, 00:08:45.529 "num_base_bdevs": 4, 00:08:45.529 "num_base_bdevs_discovered": 3, 00:08:45.529 "num_base_bdevs_operational": 4, 00:08:45.529 "base_bdevs_list": [ 00:08:45.529 { 00:08:45.529 "name": null, 00:08:45.529 "uuid": "ce29b609-18b5-4c36-ab9a-babb85f8c3f4", 00:08:45.529 "is_configured": false, 00:08:45.529 "data_offset": 0, 00:08:45.529 "data_size": 65536 00:08:45.529 }, 00:08:45.529 { 00:08:45.529 "name": "BaseBdev2", 00:08:45.529 "uuid": "093b878c-d8a8-4ae4-88ae-ef646c7ff459", 00:08:45.529 "is_configured": true, 00:08:45.529 "data_offset": 0, 00:08:45.529 "data_size": 65536 00:08:45.529 }, 00:08:45.529 { 00:08:45.529 "name": "BaseBdev3", 00:08:45.529 "uuid": "e902ed33-599d-406b-a7db-22fd8c07ae73", 00:08:45.529 "is_configured": true, 00:08:45.529 "data_offset": 0, 00:08:45.529 "data_size": 65536 00:08:45.529 }, 00:08:45.529 { 00:08:45.529 "name": 
"BaseBdev4", 00:08:45.529 "uuid": "1da4404e-7df8-49ec-a485-1675ea8fa0b2", 00:08:45.529 "is_configured": true, 00:08:45.529 "data_offset": 0, 00:08:45.529 "data_size": 65536 00:08:45.529 } 00:08:45.529 ] 00:08:45.529 }' 00:08:45.529 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.529 14:33:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.791 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.791 14:33:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.791 14:33:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.791 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:45.791 14:33:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.791 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:45.791 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.791 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:45.791 14:33:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.791 14:33:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.791 14:33:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.791 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ce29b609-18b5-4c36-ab9a-babb85f8c3f4 00:08:45.791 14:33:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.791 14:33:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.053 [2024-10-01 14:33:37.500135] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:46.053 [2024-10-01 14:33:37.500179] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:46.053 [2024-10-01 14:33:37.500186] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:08:46.053 [2024-10-01 14:33:37.500436] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:46.053 [2024-10-01 14:33:37.500554] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:46.053 [2024-10-01 14:33:37.500564] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:46.053 [2024-10-01 14:33:37.500794] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:46.053 NewBaseBdev 00:08:46.053 14:33:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.053 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:46.053 14:33:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:46.053 14:33:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:46.053 14:33:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:46.053 14:33:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:46.053 14:33:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:46.053 14:33:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:46.053 14:33:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.053 14:33:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.053 14:33:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.053 14:33:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:46.053 14:33:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.053 14:33:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.053 [ 00:08:46.053 { 00:08:46.053 "name": "NewBaseBdev", 00:08:46.053 "aliases": [ 00:08:46.053 "ce29b609-18b5-4c36-ab9a-babb85f8c3f4" 00:08:46.053 ], 00:08:46.053 "product_name": "Malloc disk", 00:08:46.053 "block_size": 512, 00:08:46.053 "num_blocks": 65536, 00:08:46.053 "uuid": "ce29b609-18b5-4c36-ab9a-babb85f8c3f4", 00:08:46.053 "assigned_rate_limits": { 00:08:46.053 "rw_ios_per_sec": 0, 00:08:46.053 "rw_mbytes_per_sec": 0, 00:08:46.053 "r_mbytes_per_sec": 0, 00:08:46.053 "w_mbytes_per_sec": 0 00:08:46.053 }, 00:08:46.053 "claimed": true, 00:08:46.053 "claim_type": "exclusive_write", 00:08:46.053 "zoned": false, 00:08:46.053 "supported_io_types": { 00:08:46.053 "read": true, 00:08:46.053 "write": true, 00:08:46.053 "unmap": true, 00:08:46.053 "flush": true, 00:08:46.053 "reset": true, 00:08:46.053 "nvme_admin": false, 00:08:46.053 "nvme_io": false, 00:08:46.053 "nvme_io_md": false, 00:08:46.053 "write_zeroes": true, 00:08:46.053 "zcopy": true, 00:08:46.053 "get_zone_info": false, 00:08:46.053 "zone_management": false, 00:08:46.053 "zone_append": false, 00:08:46.053 "compare": false, 00:08:46.053 "compare_and_write": false, 00:08:46.053 "abort": true, 00:08:46.053 "seek_hole": false, 00:08:46.053 "seek_data": false, 00:08:46.053 "copy": true, 00:08:46.053 "nvme_iov_md": false 00:08:46.053 }, 00:08:46.053 "memory_domains": [ 00:08:46.053 { 00:08:46.053 
"dma_device_id": "system", 00:08:46.053 "dma_device_type": 1 00:08:46.053 }, 00:08:46.053 { 00:08:46.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.053 "dma_device_type": 2 00:08:46.053 } 00:08:46.053 ], 00:08:46.053 "driver_specific": {} 00:08:46.053 } 00:08:46.053 ] 00:08:46.053 14:33:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.053 14:33:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:46.053 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:08:46.053 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.053 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:46.053 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:46.053 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.053 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:46.053 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.054 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.054 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.054 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.054 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.054 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.054 14:33:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:46.054 14:33:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.054 14:33:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.054 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.054 "name": "Existed_Raid", 00:08:46.054 "uuid": "e20d98af-2212-4d73-a22d-7cdcb2e5a23a", 00:08:46.054 "strip_size_kb": 64, 00:08:46.054 "state": "online", 00:08:46.054 "raid_level": "concat", 00:08:46.054 "superblock": false, 00:08:46.054 "num_base_bdevs": 4, 00:08:46.054 "num_base_bdevs_discovered": 4, 00:08:46.054 "num_base_bdevs_operational": 4, 00:08:46.054 "base_bdevs_list": [ 00:08:46.054 { 00:08:46.054 "name": "NewBaseBdev", 00:08:46.054 "uuid": "ce29b609-18b5-4c36-ab9a-babb85f8c3f4", 00:08:46.054 "is_configured": true, 00:08:46.054 "data_offset": 0, 00:08:46.054 "data_size": 65536 00:08:46.054 }, 00:08:46.054 { 00:08:46.054 "name": "BaseBdev2", 00:08:46.054 "uuid": "093b878c-d8a8-4ae4-88ae-ef646c7ff459", 00:08:46.054 "is_configured": true, 00:08:46.054 "data_offset": 0, 00:08:46.054 "data_size": 65536 00:08:46.054 }, 00:08:46.054 { 00:08:46.054 "name": "BaseBdev3", 00:08:46.054 "uuid": "e902ed33-599d-406b-a7db-22fd8c07ae73", 00:08:46.054 "is_configured": true, 00:08:46.054 "data_offset": 0, 00:08:46.054 "data_size": 65536 00:08:46.054 }, 00:08:46.054 { 00:08:46.054 "name": "BaseBdev4", 00:08:46.054 "uuid": "1da4404e-7df8-49ec-a485-1675ea8fa0b2", 00:08:46.054 "is_configured": true, 00:08:46.054 "data_offset": 0, 00:08:46.054 "data_size": 65536 00:08:46.054 } 00:08:46.054 ] 00:08:46.054 }' 00:08:46.054 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.054 14:33:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.316 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:46.316 
14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:46.316 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:46.316 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:46.316 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:46.316 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:46.316 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:46.316 14:33:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.316 14:33:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.316 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:46.316 [2024-10-01 14:33:37.876626] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:46.316 14:33:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.316 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:46.316 "name": "Existed_Raid", 00:08:46.316 "aliases": [ 00:08:46.316 "e20d98af-2212-4d73-a22d-7cdcb2e5a23a" 00:08:46.316 ], 00:08:46.316 "product_name": "Raid Volume", 00:08:46.316 "block_size": 512, 00:08:46.316 "num_blocks": 262144, 00:08:46.316 "uuid": "e20d98af-2212-4d73-a22d-7cdcb2e5a23a", 00:08:46.316 "assigned_rate_limits": { 00:08:46.316 "rw_ios_per_sec": 0, 00:08:46.316 "rw_mbytes_per_sec": 0, 00:08:46.316 "r_mbytes_per_sec": 0, 00:08:46.316 "w_mbytes_per_sec": 0 00:08:46.316 }, 00:08:46.316 "claimed": false, 00:08:46.316 "zoned": false, 00:08:46.316 "supported_io_types": { 00:08:46.316 "read": true, 00:08:46.316 "write": true, 00:08:46.316 "unmap": true, 
00:08:46.316 "flush": true, 00:08:46.316 "reset": true, 00:08:46.316 "nvme_admin": false, 00:08:46.316 "nvme_io": false, 00:08:46.316 "nvme_io_md": false, 00:08:46.316 "write_zeroes": true, 00:08:46.316 "zcopy": false, 00:08:46.316 "get_zone_info": false, 00:08:46.316 "zone_management": false, 00:08:46.316 "zone_append": false, 00:08:46.316 "compare": false, 00:08:46.316 "compare_and_write": false, 00:08:46.316 "abort": false, 00:08:46.316 "seek_hole": false, 00:08:46.316 "seek_data": false, 00:08:46.316 "copy": false, 00:08:46.316 "nvme_iov_md": false 00:08:46.316 }, 00:08:46.316 "memory_domains": [ 00:08:46.316 { 00:08:46.316 "dma_device_id": "system", 00:08:46.316 "dma_device_type": 1 00:08:46.316 }, 00:08:46.316 { 00:08:46.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.316 "dma_device_type": 2 00:08:46.316 }, 00:08:46.316 { 00:08:46.316 "dma_device_id": "system", 00:08:46.316 "dma_device_type": 1 00:08:46.316 }, 00:08:46.316 { 00:08:46.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.316 "dma_device_type": 2 00:08:46.316 }, 00:08:46.316 { 00:08:46.316 "dma_device_id": "system", 00:08:46.316 "dma_device_type": 1 00:08:46.316 }, 00:08:46.316 { 00:08:46.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.316 "dma_device_type": 2 00:08:46.316 }, 00:08:46.316 { 00:08:46.316 "dma_device_id": "system", 00:08:46.316 "dma_device_type": 1 00:08:46.316 }, 00:08:46.316 { 00:08:46.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.316 "dma_device_type": 2 00:08:46.316 } 00:08:46.316 ], 00:08:46.316 "driver_specific": { 00:08:46.316 "raid": { 00:08:46.316 "uuid": "e20d98af-2212-4d73-a22d-7cdcb2e5a23a", 00:08:46.316 "strip_size_kb": 64, 00:08:46.316 "state": "online", 00:08:46.316 "raid_level": "concat", 00:08:46.316 "superblock": false, 00:08:46.316 "num_base_bdevs": 4, 00:08:46.316 "num_base_bdevs_discovered": 4, 00:08:46.316 "num_base_bdevs_operational": 4, 00:08:46.316 "base_bdevs_list": [ 00:08:46.316 { 00:08:46.316 "name": "NewBaseBdev", 
00:08:46.316 "uuid": "ce29b609-18b5-4c36-ab9a-babb85f8c3f4", 00:08:46.316 "is_configured": true, 00:08:46.317 "data_offset": 0, 00:08:46.317 "data_size": 65536 00:08:46.317 }, 00:08:46.317 { 00:08:46.317 "name": "BaseBdev2", 00:08:46.317 "uuid": "093b878c-d8a8-4ae4-88ae-ef646c7ff459", 00:08:46.317 "is_configured": true, 00:08:46.317 "data_offset": 0, 00:08:46.317 "data_size": 65536 00:08:46.317 }, 00:08:46.317 { 00:08:46.317 "name": "BaseBdev3", 00:08:46.317 "uuid": "e902ed33-599d-406b-a7db-22fd8c07ae73", 00:08:46.317 "is_configured": true, 00:08:46.317 "data_offset": 0, 00:08:46.317 "data_size": 65536 00:08:46.317 }, 00:08:46.317 { 00:08:46.317 "name": "BaseBdev4", 00:08:46.317 "uuid": "1da4404e-7df8-49ec-a485-1675ea8fa0b2", 00:08:46.317 "is_configured": true, 00:08:46.317 "data_offset": 0, 00:08:46.317 "data_size": 65536 00:08:46.317 } 00:08:46.317 ] 00:08:46.317 } 00:08:46.317 } 00:08:46.317 }' 00:08:46.317 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:46.317 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:46.317 BaseBdev2 00:08:46.317 BaseBdev3 00:08:46.317 BaseBdev4' 00:08:46.317 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.317 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:46.317 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.317 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:46.317 14:33:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.317 14:33:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:46.317 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.317 14:33:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.317 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:46.317 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.317 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.317 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.317 14:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:46.317 14:33:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.317 14:33:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.317 14:33:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.578 14:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:46.578 14:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.579 14:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.579 14:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:46.579 14:33:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.579 14:33:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.579 14:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.579 14:33:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.579 14:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:46.579 14:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.579 14:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.579 14:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.579 14:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:08:46.579 14:33:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.579 14:33:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.579 14:33:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.579 14:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:46.579 14:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.579 14:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:46.579 14:33:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.579 14:33:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.579 [2024-10-01 14:33:38.088313] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:46.579 [2024-10-01 14:33:38.088427] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:46.579 [2024-10-01 14:33:38.088503] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:08:46.579 [2024-10-01 14:33:38.088570] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:46.579 [2024-10-01 14:33:38.088579] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:46.579 14:33:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.579 14:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69672 00:08:46.579 14:33:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 69672 ']' 00:08:46.579 14:33:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 69672 00:08:46.579 14:33:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:46.579 14:33:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:46.579 14:33:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69672 00:08:46.579 killing process with pid 69672 00:08:46.579 14:33:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:46.579 14:33:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:46.579 14:33:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69672' 00:08:46.579 14:33:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 69672 00:08:46.579 [2024-10-01 14:33:38.120368] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:46.579 14:33:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 69672 00:08:46.839 [2024-10-01 14:33:38.363036] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:47.781 14:33:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:47.781 00:08:47.781 real 0m8.521s 00:08:47.781 user 0m13.493s 00:08:47.781 sys 0m1.381s 00:08:47.781 14:33:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:47.781 14:33:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.781 ************************************ 00:08:47.781 END TEST raid_state_function_test 00:08:47.781 ************************************ 00:08:47.781 14:33:39 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:08:47.781 14:33:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:47.781 14:33:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:47.781 14:33:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:47.781 ************************************ 00:08:47.781 START TEST raid_state_function_test_sb 00:08:47.781 ************************************ 00:08:47.781 14:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:08:47.781 14:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:47.781 14:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:08:47.781 14:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:47.781 14:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:47.781 14:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:47.781 14:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:47.781 14:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:47.781 14:33:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:47.781 14:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:47.781 14:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:47.781 14:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:47.781 14:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:47.781 14:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:47.781 14:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:47.781 14:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:47.781 14:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:08:47.781 14:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:47.781 14:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:47.781 14:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:08:47.781 14:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:47.781 14:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:47.781 14:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:47.781 14:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:47.781 14:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:47.781 14:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 
00:08:47.781 14:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:47.781 14:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:47.781 14:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:47.781 14:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:47.781 Process raid pid: 70310 00:08:47.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.781 14:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70310 00:08:47.781 14:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70310' 00:08:47.781 14:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70310 00:08:47.781 14:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 70310 ']' 00:08:47.781 14:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.781 14:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:47.781 14:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.781 14:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:47.781 14:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.781 14:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:47.781 [2024-10-01 14:33:39.300384] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:08:47.781 [2024-10-01 14:33:39.300518] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:47.781 [2024-10-01 14:33:39.446176] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.043 [2024-10-01 14:33:39.636727] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.304 [2024-10-01 14:33:39.777876] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.304 [2024-10-01 14:33:39.777930] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.565 14:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:48.565 14:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:48.565 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:48.565 14:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.565 14:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.565 [2024-10-01 14:33:40.164547] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:48.565 [2024-10-01 14:33:40.164601] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:48.565 [2024-10-01 14:33:40.164611] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:48.565 [2024-10-01 14:33:40.164621] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:48.565 [2024-10-01 14:33:40.164627] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:08:48.565 [2024-10-01 14:33:40.164636] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:48.565 [2024-10-01 14:33:40.164642] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:48.565 [2024-10-01 14:33:40.164652] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:48.565 14:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.565 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:48.565 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.565 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.565 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:48.565 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.565 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:48.565 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.565 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.565 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.565 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.565 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.565 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.565 
14:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.565 14:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.565 14:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.565 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.565 "name": "Existed_Raid", 00:08:48.565 "uuid": "617626a6-dc9a-42c1-8195-d24e2b7da518", 00:08:48.566 "strip_size_kb": 64, 00:08:48.566 "state": "configuring", 00:08:48.566 "raid_level": "concat", 00:08:48.566 "superblock": true, 00:08:48.566 "num_base_bdevs": 4, 00:08:48.566 "num_base_bdevs_discovered": 0, 00:08:48.566 "num_base_bdevs_operational": 4, 00:08:48.566 "base_bdevs_list": [ 00:08:48.566 { 00:08:48.566 "name": "BaseBdev1", 00:08:48.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.566 "is_configured": false, 00:08:48.566 "data_offset": 0, 00:08:48.566 "data_size": 0 00:08:48.566 }, 00:08:48.566 { 00:08:48.566 "name": "BaseBdev2", 00:08:48.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.566 "is_configured": false, 00:08:48.566 "data_offset": 0, 00:08:48.566 "data_size": 0 00:08:48.566 }, 00:08:48.566 { 00:08:48.566 "name": "BaseBdev3", 00:08:48.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.566 "is_configured": false, 00:08:48.566 "data_offset": 0, 00:08:48.566 "data_size": 0 00:08:48.566 }, 00:08:48.566 { 00:08:48.566 "name": "BaseBdev4", 00:08:48.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.566 "is_configured": false, 00:08:48.566 "data_offset": 0, 00:08:48.566 "data_size": 0 00:08:48.566 } 00:08:48.566 ] 00:08:48.566 }' 00:08:48.566 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.566 14:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.827 14:33:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:48.827 14:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.827 14:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.827 [2024-10-01 14:33:40.500525] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:48.827 [2024-10-01 14:33:40.500563] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:48.827 14:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.827 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:48.827 14:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.827 14:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.827 [2024-10-01 14:33:40.508568] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:48.827 [2024-10-01 14:33:40.508686] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:48.827 [2024-10-01 14:33:40.508760] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:48.827 [2024-10-01 14:33:40.508789] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:48.827 [2024-10-01 14:33:40.508807] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:48.827 [2024-10-01 14:33:40.508827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:48.827 [2024-10-01 14:33:40.508846] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:08:48.827 [2024-10-01 14:33:40.508866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:49.088 14:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.088 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:49.088 14:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.088 14:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.088 [2024-10-01 14:33:40.551976] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:49.088 BaseBdev1 00:08:49.088 14:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.088 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:49.088 14:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:49.088 14:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:49.088 14:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:49.088 14:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:49.088 14:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:49.088 14:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:49.088 14:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.088 14:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.088 14:33:40 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.088 14:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:49.088 14:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.088 14:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.088 [ 00:08:49.088 { 00:08:49.088 "name": "BaseBdev1", 00:08:49.088 "aliases": [ 00:08:49.088 "4295f5dc-8fa3-4333-a622-559d8eeaaba7" 00:08:49.088 ], 00:08:49.088 "product_name": "Malloc disk", 00:08:49.088 "block_size": 512, 00:08:49.088 "num_blocks": 65536, 00:08:49.088 "uuid": "4295f5dc-8fa3-4333-a622-559d8eeaaba7", 00:08:49.088 "assigned_rate_limits": { 00:08:49.088 "rw_ios_per_sec": 0, 00:08:49.088 "rw_mbytes_per_sec": 0, 00:08:49.088 "r_mbytes_per_sec": 0, 00:08:49.088 "w_mbytes_per_sec": 0 00:08:49.088 }, 00:08:49.088 "claimed": true, 00:08:49.088 "claim_type": "exclusive_write", 00:08:49.088 "zoned": false, 00:08:49.088 "supported_io_types": { 00:08:49.088 "read": true, 00:08:49.088 "write": true, 00:08:49.088 "unmap": true, 00:08:49.088 "flush": true, 00:08:49.088 "reset": true, 00:08:49.088 "nvme_admin": false, 00:08:49.088 "nvme_io": false, 00:08:49.088 "nvme_io_md": false, 00:08:49.088 "write_zeroes": true, 00:08:49.088 "zcopy": true, 00:08:49.088 "get_zone_info": false, 00:08:49.088 "zone_management": false, 00:08:49.088 "zone_append": false, 00:08:49.088 "compare": false, 00:08:49.088 "compare_and_write": false, 00:08:49.088 "abort": true, 00:08:49.088 "seek_hole": false, 00:08:49.088 "seek_data": false, 00:08:49.088 "copy": true, 00:08:49.088 "nvme_iov_md": false 00:08:49.088 }, 00:08:49.088 "memory_domains": [ 00:08:49.088 { 00:08:49.088 "dma_device_id": "system", 00:08:49.088 "dma_device_type": 1 00:08:49.088 }, 00:08:49.088 { 00:08:49.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.088 "dma_device_type": 2 00:08:49.088 } 
00:08:49.088 ], 00:08:49.088 "driver_specific": {} 00:08:49.088 } 00:08:49.088 ] 00:08:49.088 14:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.088 14:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:49.088 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:49.089 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.089 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.089 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:49.089 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.089 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:49.089 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.089 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.089 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.089 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.089 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.089 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.089 14:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.089 14:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.089 14:33:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.089 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.089 "name": "Existed_Raid", 00:08:49.089 "uuid": "53a08d4d-716a-4f2d-a1d5-88ed15040a80", 00:08:49.089 "strip_size_kb": 64, 00:08:49.089 "state": "configuring", 00:08:49.089 "raid_level": "concat", 00:08:49.089 "superblock": true, 00:08:49.089 "num_base_bdevs": 4, 00:08:49.089 "num_base_bdevs_discovered": 1, 00:08:49.089 "num_base_bdevs_operational": 4, 00:08:49.089 "base_bdevs_list": [ 00:08:49.089 { 00:08:49.089 "name": "BaseBdev1", 00:08:49.089 "uuid": "4295f5dc-8fa3-4333-a622-559d8eeaaba7", 00:08:49.089 "is_configured": true, 00:08:49.089 "data_offset": 2048, 00:08:49.089 "data_size": 63488 00:08:49.089 }, 00:08:49.089 { 00:08:49.089 "name": "BaseBdev2", 00:08:49.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.089 "is_configured": false, 00:08:49.089 "data_offset": 0, 00:08:49.089 "data_size": 0 00:08:49.089 }, 00:08:49.089 { 00:08:49.089 "name": "BaseBdev3", 00:08:49.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.089 "is_configured": false, 00:08:49.089 "data_offset": 0, 00:08:49.089 "data_size": 0 00:08:49.089 }, 00:08:49.089 { 00:08:49.089 "name": "BaseBdev4", 00:08:49.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.089 "is_configured": false, 00:08:49.089 "data_offset": 0, 00:08:49.089 "data_size": 0 00:08:49.089 } 00:08:49.089 ] 00:08:49.089 }' 00:08:49.089 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.089 14:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.350 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:49.350 14:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.350 14:33:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.350 [2024-10-01 14:33:40.904102] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:49.350 [2024-10-01 14:33:40.904148] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:49.350 14:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.350 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:49.350 14:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.350 14:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.350 [2024-10-01 14:33:40.916165] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:49.350 [2024-10-01 14:33:40.918086] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:49.350 [2024-10-01 14:33:40.918206] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:49.350 [2024-10-01 14:33:40.918262] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:49.350 [2024-10-01 14:33:40.918290] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:49.350 [2024-10-01 14:33:40.918310] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:49.350 [2024-10-01 14:33:40.918330] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:49.350 14:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.350 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:08:49.350 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:49.350 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:49.350 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.350 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.350 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:49.350 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.350 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:49.350 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.350 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.350 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.350 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.350 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.350 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.350 14:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.350 14:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.350 14:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.350 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:08:49.350 "name": "Existed_Raid", 00:08:49.350 "uuid": "cc83e7a5-c3a9-4830-b690-e94f3066298a", 00:08:49.350 "strip_size_kb": 64, 00:08:49.350 "state": "configuring", 00:08:49.350 "raid_level": "concat", 00:08:49.350 "superblock": true, 00:08:49.350 "num_base_bdevs": 4, 00:08:49.350 "num_base_bdevs_discovered": 1, 00:08:49.350 "num_base_bdevs_operational": 4, 00:08:49.350 "base_bdevs_list": [ 00:08:49.350 { 00:08:49.350 "name": "BaseBdev1", 00:08:49.350 "uuid": "4295f5dc-8fa3-4333-a622-559d8eeaaba7", 00:08:49.350 "is_configured": true, 00:08:49.350 "data_offset": 2048, 00:08:49.350 "data_size": 63488 00:08:49.350 }, 00:08:49.350 { 00:08:49.350 "name": "BaseBdev2", 00:08:49.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.350 "is_configured": false, 00:08:49.350 "data_offset": 0, 00:08:49.350 "data_size": 0 00:08:49.350 }, 00:08:49.350 { 00:08:49.350 "name": "BaseBdev3", 00:08:49.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.350 "is_configured": false, 00:08:49.350 "data_offset": 0, 00:08:49.350 "data_size": 0 00:08:49.350 }, 00:08:49.350 { 00:08:49.350 "name": "BaseBdev4", 00:08:49.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.350 "is_configured": false, 00:08:49.350 "data_offset": 0, 00:08:49.350 "data_size": 0 00:08:49.350 } 00:08:49.350 ] 00:08:49.350 }' 00:08:49.350 14:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.350 14:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.611 14:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:49.611 14:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.611 14:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.611 [2024-10-01 14:33:41.250722] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:08:49.611 BaseBdev2 00:08:49.611 14:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.611 14:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:49.611 14:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:49.611 14:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:49.611 14:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:49.611 14:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:49.611 14:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:49.611 14:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:49.611 14:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.611 14:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.611 14:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.611 14:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:49.611 14:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.611 14:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.611 [ 00:08:49.611 { 00:08:49.611 "name": "BaseBdev2", 00:08:49.611 "aliases": [ 00:08:49.611 "32a87b42-be36-4c5e-a186-867154e19516" 00:08:49.611 ], 00:08:49.611 "product_name": "Malloc disk", 00:08:49.611 "block_size": 512, 00:08:49.611 "num_blocks": 65536, 00:08:49.611 "uuid": "32a87b42-be36-4c5e-a186-867154e19516", 
00:08:49.611 "assigned_rate_limits": { 00:08:49.611 "rw_ios_per_sec": 0, 00:08:49.611 "rw_mbytes_per_sec": 0, 00:08:49.611 "r_mbytes_per_sec": 0, 00:08:49.611 "w_mbytes_per_sec": 0 00:08:49.611 }, 00:08:49.611 "claimed": true, 00:08:49.611 "claim_type": "exclusive_write", 00:08:49.611 "zoned": false, 00:08:49.611 "supported_io_types": { 00:08:49.611 "read": true, 00:08:49.611 "write": true, 00:08:49.611 "unmap": true, 00:08:49.611 "flush": true, 00:08:49.611 "reset": true, 00:08:49.611 "nvme_admin": false, 00:08:49.611 "nvme_io": false, 00:08:49.611 "nvme_io_md": false, 00:08:49.611 "write_zeroes": true, 00:08:49.611 "zcopy": true, 00:08:49.611 "get_zone_info": false, 00:08:49.611 "zone_management": false, 00:08:49.611 "zone_append": false, 00:08:49.611 "compare": false, 00:08:49.611 "compare_and_write": false, 00:08:49.611 "abort": true, 00:08:49.611 "seek_hole": false, 00:08:49.611 "seek_data": false, 00:08:49.611 "copy": true, 00:08:49.611 "nvme_iov_md": false 00:08:49.611 }, 00:08:49.611 "memory_domains": [ 00:08:49.611 { 00:08:49.611 "dma_device_id": "system", 00:08:49.611 "dma_device_type": 1 00:08:49.611 }, 00:08:49.611 { 00:08:49.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.611 "dma_device_type": 2 00:08:49.611 } 00:08:49.611 ], 00:08:49.611 "driver_specific": {} 00:08:49.611 } 00:08:49.611 ] 00:08:49.611 14:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.611 14:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:49.611 14:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:49.611 14:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:49.611 14:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:49.611 14:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:08:49.611 14:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.611 14:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:49.611 14:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.612 14:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:49.612 14:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.612 14:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.612 14:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.612 14:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.612 14:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.612 14:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.612 14:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.612 14:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.873 14:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.873 14:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.873 "name": "Existed_Raid", 00:08:49.873 "uuid": "cc83e7a5-c3a9-4830-b690-e94f3066298a", 00:08:49.873 "strip_size_kb": 64, 00:08:49.873 "state": "configuring", 00:08:49.873 "raid_level": "concat", 00:08:49.873 "superblock": true, 00:08:49.873 "num_base_bdevs": 4, 00:08:49.873 "num_base_bdevs_discovered": 2, 00:08:49.873 
"num_base_bdevs_operational": 4, 00:08:49.873 "base_bdevs_list": [ 00:08:49.873 { 00:08:49.873 "name": "BaseBdev1", 00:08:49.873 "uuid": "4295f5dc-8fa3-4333-a622-559d8eeaaba7", 00:08:49.873 "is_configured": true, 00:08:49.873 "data_offset": 2048, 00:08:49.873 "data_size": 63488 00:08:49.873 }, 00:08:49.873 { 00:08:49.873 "name": "BaseBdev2", 00:08:49.873 "uuid": "32a87b42-be36-4c5e-a186-867154e19516", 00:08:49.873 "is_configured": true, 00:08:49.873 "data_offset": 2048, 00:08:49.873 "data_size": 63488 00:08:49.873 }, 00:08:49.873 { 00:08:49.873 "name": "BaseBdev3", 00:08:49.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.873 "is_configured": false, 00:08:49.873 "data_offset": 0, 00:08:49.873 "data_size": 0 00:08:49.873 }, 00:08:49.873 { 00:08:49.873 "name": "BaseBdev4", 00:08:49.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.873 "is_configured": false, 00:08:49.873 "data_offset": 0, 00:08:49.873 "data_size": 0 00:08:49.873 } 00:08:49.873 ] 00:08:49.873 }' 00:08:49.873 14:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.873 14:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.160 14:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:50.160 14:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.160 14:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.160 [2024-10-01 14:33:41.621732] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:50.160 BaseBdev3 00:08:50.160 14:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.160 14:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:50.160 14:33:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:50.160 14:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:50.160 14:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:50.160 14:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:50.160 14:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:50.160 14:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:50.160 14:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.160 14:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.160 14:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.160 14:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:50.160 14:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.160 14:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.160 [ 00:08:50.160 { 00:08:50.160 "name": "BaseBdev3", 00:08:50.160 "aliases": [ 00:08:50.160 "b78b9909-3d11-4d7c-bdb3-d1660e592cd1" 00:08:50.160 ], 00:08:50.160 "product_name": "Malloc disk", 00:08:50.160 "block_size": 512, 00:08:50.160 "num_blocks": 65536, 00:08:50.160 "uuid": "b78b9909-3d11-4d7c-bdb3-d1660e592cd1", 00:08:50.160 "assigned_rate_limits": { 00:08:50.160 "rw_ios_per_sec": 0, 00:08:50.160 "rw_mbytes_per_sec": 0, 00:08:50.160 "r_mbytes_per_sec": 0, 00:08:50.160 "w_mbytes_per_sec": 0 00:08:50.160 }, 00:08:50.160 "claimed": true, 00:08:50.160 "claim_type": "exclusive_write", 00:08:50.160 "zoned": false, 00:08:50.160 "supported_io_types": { 
00:08:50.160 "read": true, 00:08:50.160 "write": true, 00:08:50.160 "unmap": true, 00:08:50.160 "flush": true, 00:08:50.160 "reset": true, 00:08:50.160 "nvme_admin": false, 00:08:50.160 "nvme_io": false, 00:08:50.160 "nvme_io_md": false, 00:08:50.160 "write_zeroes": true, 00:08:50.160 "zcopy": true, 00:08:50.160 "get_zone_info": false, 00:08:50.160 "zone_management": false, 00:08:50.160 "zone_append": false, 00:08:50.160 "compare": false, 00:08:50.160 "compare_and_write": false, 00:08:50.160 "abort": true, 00:08:50.160 "seek_hole": false, 00:08:50.160 "seek_data": false, 00:08:50.160 "copy": true, 00:08:50.160 "nvme_iov_md": false 00:08:50.160 }, 00:08:50.160 "memory_domains": [ 00:08:50.160 { 00:08:50.160 "dma_device_id": "system", 00:08:50.160 "dma_device_type": 1 00:08:50.160 }, 00:08:50.160 { 00:08:50.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.160 "dma_device_type": 2 00:08:50.160 } 00:08:50.160 ], 00:08:50.160 "driver_specific": {} 00:08:50.160 } 00:08:50.160 ] 00:08:50.160 14:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.160 14:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:50.160 14:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:50.160 14:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:50.160 14:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:50.160 14:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.160 14:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.160 14:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:50.160 14:33:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.160 14:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:50.160 14:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.160 14:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.160 14:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.160 14:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.160 14:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.160 14:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.160 14:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.160 14:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.160 14:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.160 14:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.160 "name": "Existed_Raid", 00:08:50.160 "uuid": "cc83e7a5-c3a9-4830-b690-e94f3066298a", 00:08:50.160 "strip_size_kb": 64, 00:08:50.160 "state": "configuring", 00:08:50.160 "raid_level": "concat", 00:08:50.160 "superblock": true, 00:08:50.160 "num_base_bdevs": 4, 00:08:50.160 "num_base_bdevs_discovered": 3, 00:08:50.160 "num_base_bdevs_operational": 4, 00:08:50.160 "base_bdevs_list": [ 00:08:50.160 { 00:08:50.160 "name": "BaseBdev1", 00:08:50.160 "uuid": "4295f5dc-8fa3-4333-a622-559d8eeaaba7", 00:08:50.160 "is_configured": true, 00:08:50.160 "data_offset": 2048, 00:08:50.160 "data_size": 63488 00:08:50.160 }, 00:08:50.160 { 00:08:50.160 "name": "BaseBdev2", 00:08:50.160 
"uuid": "32a87b42-be36-4c5e-a186-867154e19516", 00:08:50.160 "is_configured": true, 00:08:50.160 "data_offset": 2048, 00:08:50.160 "data_size": 63488 00:08:50.160 }, 00:08:50.160 { 00:08:50.160 "name": "BaseBdev3", 00:08:50.160 "uuid": "b78b9909-3d11-4d7c-bdb3-d1660e592cd1", 00:08:50.160 "is_configured": true, 00:08:50.160 "data_offset": 2048, 00:08:50.160 "data_size": 63488 00:08:50.160 }, 00:08:50.160 { 00:08:50.160 "name": "BaseBdev4", 00:08:50.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.160 "is_configured": false, 00:08:50.160 "data_offset": 0, 00:08:50.160 "data_size": 0 00:08:50.160 } 00:08:50.160 ] 00:08:50.160 }' 00:08:50.160 14:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.160 14:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.421 14:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:08:50.421 14:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.421 14:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.421 [2024-10-01 14:33:42.024440] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:08:50.421 [2024-10-01 14:33:42.024659] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:50.421 [2024-10-01 14:33:42.024676] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:08:50.421 [2024-10-01 14:33:42.024959] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:50.421 [2024-10-01 14:33:42.025091] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:50.421 [2024-10-01 14:33:42.025105] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:08:50.421 [2024-10-01 14:33:42.025226] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:50.421 BaseBdev4 00:08:50.421 14:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.421 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:08:50.422 14:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:08:50.422 14:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:50.422 14:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:50.422 14:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:50.422 14:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:50.422 14:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:50.422 14:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.422 14:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.422 14:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.422 14:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:08:50.422 14:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.422 14:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.422 [ 00:08:50.422 { 00:08:50.422 "name": "BaseBdev4", 00:08:50.422 "aliases": [ 00:08:50.422 "abb3192f-220a-4a5c-9f4c-e3c4049b5634" 00:08:50.422 ], 00:08:50.422 "product_name": "Malloc disk", 00:08:50.422 "block_size": 512, 
00:08:50.422 "num_blocks": 65536, 00:08:50.422 "uuid": "abb3192f-220a-4a5c-9f4c-e3c4049b5634", 00:08:50.422 "assigned_rate_limits": { 00:08:50.422 "rw_ios_per_sec": 0, 00:08:50.422 "rw_mbytes_per_sec": 0, 00:08:50.422 "r_mbytes_per_sec": 0, 00:08:50.422 "w_mbytes_per_sec": 0 00:08:50.422 }, 00:08:50.422 "claimed": true, 00:08:50.422 "claim_type": "exclusive_write", 00:08:50.422 "zoned": false, 00:08:50.422 "supported_io_types": { 00:08:50.422 "read": true, 00:08:50.422 "write": true, 00:08:50.422 "unmap": true, 00:08:50.422 "flush": true, 00:08:50.422 "reset": true, 00:08:50.422 "nvme_admin": false, 00:08:50.422 "nvme_io": false, 00:08:50.422 "nvme_io_md": false, 00:08:50.422 "write_zeroes": true, 00:08:50.422 "zcopy": true, 00:08:50.422 "get_zone_info": false, 00:08:50.422 "zone_management": false, 00:08:50.422 "zone_append": false, 00:08:50.422 "compare": false, 00:08:50.422 "compare_and_write": false, 00:08:50.422 "abort": true, 00:08:50.422 "seek_hole": false, 00:08:50.422 "seek_data": false, 00:08:50.422 "copy": true, 00:08:50.422 "nvme_iov_md": false 00:08:50.422 }, 00:08:50.422 "memory_domains": [ 00:08:50.422 { 00:08:50.422 "dma_device_id": "system", 00:08:50.422 "dma_device_type": 1 00:08:50.422 }, 00:08:50.422 { 00:08:50.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.422 "dma_device_type": 2 00:08:50.422 } 00:08:50.422 ], 00:08:50.422 "driver_specific": {} 00:08:50.422 } 00:08:50.422 ] 00:08:50.422 14:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.422 14:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:50.422 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:50.422 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:50.422 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 
64 4 00:08:50.422 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.422 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:50.422 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:50.422 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.422 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:50.422 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.422 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.422 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.422 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.422 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.422 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.422 14:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.422 14:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.422 14:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.422 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.422 "name": "Existed_Raid", 00:08:50.422 "uuid": "cc83e7a5-c3a9-4830-b690-e94f3066298a", 00:08:50.422 "strip_size_kb": 64, 00:08:50.422 "state": "online", 00:08:50.422 "raid_level": "concat", 00:08:50.422 "superblock": true, 00:08:50.422 "num_base_bdevs": 
4, 00:08:50.422 "num_base_bdevs_discovered": 4, 00:08:50.422 "num_base_bdevs_operational": 4, 00:08:50.422 "base_bdevs_list": [ 00:08:50.422 { 00:08:50.422 "name": "BaseBdev1", 00:08:50.422 "uuid": "4295f5dc-8fa3-4333-a622-559d8eeaaba7", 00:08:50.422 "is_configured": true, 00:08:50.422 "data_offset": 2048, 00:08:50.422 "data_size": 63488 00:08:50.422 }, 00:08:50.422 { 00:08:50.422 "name": "BaseBdev2", 00:08:50.422 "uuid": "32a87b42-be36-4c5e-a186-867154e19516", 00:08:50.422 "is_configured": true, 00:08:50.422 "data_offset": 2048, 00:08:50.422 "data_size": 63488 00:08:50.422 }, 00:08:50.422 { 00:08:50.422 "name": "BaseBdev3", 00:08:50.422 "uuid": "b78b9909-3d11-4d7c-bdb3-d1660e592cd1", 00:08:50.422 "is_configured": true, 00:08:50.422 "data_offset": 2048, 00:08:50.422 "data_size": 63488 00:08:50.422 }, 00:08:50.422 { 00:08:50.422 "name": "BaseBdev4", 00:08:50.422 "uuid": "abb3192f-220a-4a5c-9f4c-e3c4049b5634", 00:08:50.422 "is_configured": true, 00:08:50.422 "data_offset": 2048, 00:08:50.422 "data_size": 63488 00:08:50.422 } 00:08:50.422 ] 00:08:50.422 }' 00:08:50.422 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.422 14:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.683 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:50.683 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:50.683 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:50.683 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:50.683 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:50.683 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:50.683 
14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:50.683 14:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.683 14:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.944 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:50.944 [2024-10-01 14:33:42.368939] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:50.944 14:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.944 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:50.944 "name": "Existed_Raid", 00:08:50.944 "aliases": [ 00:08:50.944 "cc83e7a5-c3a9-4830-b690-e94f3066298a" 00:08:50.944 ], 00:08:50.944 "product_name": "Raid Volume", 00:08:50.944 "block_size": 512, 00:08:50.944 "num_blocks": 253952, 00:08:50.944 "uuid": "cc83e7a5-c3a9-4830-b690-e94f3066298a", 00:08:50.944 "assigned_rate_limits": { 00:08:50.944 "rw_ios_per_sec": 0, 00:08:50.944 "rw_mbytes_per_sec": 0, 00:08:50.944 "r_mbytes_per_sec": 0, 00:08:50.944 "w_mbytes_per_sec": 0 00:08:50.944 }, 00:08:50.944 "claimed": false, 00:08:50.944 "zoned": false, 00:08:50.944 "supported_io_types": { 00:08:50.944 "read": true, 00:08:50.944 "write": true, 00:08:50.944 "unmap": true, 00:08:50.944 "flush": true, 00:08:50.944 "reset": true, 00:08:50.944 "nvme_admin": false, 00:08:50.944 "nvme_io": false, 00:08:50.944 "nvme_io_md": false, 00:08:50.944 "write_zeroes": true, 00:08:50.944 "zcopy": false, 00:08:50.944 "get_zone_info": false, 00:08:50.944 "zone_management": false, 00:08:50.944 "zone_append": false, 00:08:50.944 "compare": false, 00:08:50.944 "compare_and_write": false, 00:08:50.944 "abort": false, 00:08:50.944 "seek_hole": false, 00:08:50.944 "seek_data": false, 00:08:50.944 "copy": false, 00:08:50.944 
"nvme_iov_md": false 00:08:50.944 }, 00:08:50.944 "memory_domains": [ 00:08:50.944 { 00:08:50.945 "dma_device_id": "system", 00:08:50.945 "dma_device_type": 1 00:08:50.945 }, 00:08:50.945 { 00:08:50.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.945 "dma_device_type": 2 00:08:50.945 }, 00:08:50.945 { 00:08:50.945 "dma_device_id": "system", 00:08:50.945 "dma_device_type": 1 00:08:50.945 }, 00:08:50.945 { 00:08:50.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.945 "dma_device_type": 2 00:08:50.945 }, 00:08:50.945 { 00:08:50.945 "dma_device_id": "system", 00:08:50.945 "dma_device_type": 1 00:08:50.945 }, 00:08:50.945 { 00:08:50.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.945 "dma_device_type": 2 00:08:50.945 }, 00:08:50.945 { 00:08:50.945 "dma_device_id": "system", 00:08:50.945 "dma_device_type": 1 00:08:50.945 }, 00:08:50.945 { 00:08:50.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.945 "dma_device_type": 2 00:08:50.945 } 00:08:50.945 ], 00:08:50.945 "driver_specific": { 00:08:50.945 "raid": { 00:08:50.945 "uuid": "cc83e7a5-c3a9-4830-b690-e94f3066298a", 00:08:50.945 "strip_size_kb": 64, 00:08:50.945 "state": "online", 00:08:50.945 "raid_level": "concat", 00:08:50.945 "superblock": true, 00:08:50.945 "num_base_bdevs": 4, 00:08:50.945 "num_base_bdevs_discovered": 4, 00:08:50.945 "num_base_bdevs_operational": 4, 00:08:50.945 "base_bdevs_list": [ 00:08:50.945 { 00:08:50.945 "name": "BaseBdev1", 00:08:50.945 "uuid": "4295f5dc-8fa3-4333-a622-559d8eeaaba7", 00:08:50.945 "is_configured": true, 00:08:50.945 "data_offset": 2048, 00:08:50.945 "data_size": 63488 00:08:50.945 }, 00:08:50.945 { 00:08:50.945 "name": "BaseBdev2", 00:08:50.945 "uuid": "32a87b42-be36-4c5e-a186-867154e19516", 00:08:50.945 "is_configured": true, 00:08:50.945 "data_offset": 2048, 00:08:50.945 "data_size": 63488 00:08:50.945 }, 00:08:50.945 { 00:08:50.945 "name": "BaseBdev3", 00:08:50.945 "uuid": "b78b9909-3d11-4d7c-bdb3-d1660e592cd1", 00:08:50.945 "is_configured": true, 
00:08:50.945 "data_offset": 2048, 00:08:50.945 "data_size": 63488 00:08:50.945 }, 00:08:50.945 { 00:08:50.945 "name": "BaseBdev4", 00:08:50.945 "uuid": "abb3192f-220a-4a5c-9f4c-e3c4049b5634", 00:08:50.945 "is_configured": true, 00:08:50.945 "data_offset": 2048, 00:08:50.945 "data_size": 63488 00:08:50.945 } 00:08:50.945 ] 00:08:50.945 } 00:08:50.945 } 00:08:50.945 }' 00:08:50.945 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:50.945 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:50.945 BaseBdev2 00:08:50.945 BaseBdev3 00:08:50.945 BaseBdev4' 00:08:50.945 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.945 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:50.945 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:50.945 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:50.945 14:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.945 14:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.945 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.945 14:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.945 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:50.945 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:50.945 14:33:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:50.945 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:50.945 14:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.945 14:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.945 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.945 14:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.945 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:50.945 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:50.945 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:50.945 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:50.945 14:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.945 14:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.945 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.945 14:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.945 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:50.945 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:50.945 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:08:50.945 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.945 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:08:50.945 14:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.945 14:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.945 14:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.945 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:50.945 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:50.945 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:50.945 14:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.945 14:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.945 [2024-10-01 14:33:42.620685] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:50.945 [2024-10-01 14:33:42.620725] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:50.945 [2024-10-01 14:33:42.620774] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:51.205 14:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.205 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:51.205 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:51.205 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:08:51.205 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:51.205 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:51.205 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:08:51.205 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.205 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:51.205 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:51.205 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.205 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.205 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.205 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.205 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.206 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.206 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.206 14:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.206 14:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.206 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.206 14:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:51.206 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.206 "name": "Existed_Raid", 00:08:51.206 "uuid": "cc83e7a5-c3a9-4830-b690-e94f3066298a", 00:08:51.206 "strip_size_kb": 64, 00:08:51.206 "state": "offline", 00:08:51.206 "raid_level": "concat", 00:08:51.206 "superblock": true, 00:08:51.206 "num_base_bdevs": 4, 00:08:51.206 "num_base_bdevs_discovered": 3, 00:08:51.206 "num_base_bdevs_operational": 3, 00:08:51.206 "base_bdevs_list": [ 00:08:51.206 { 00:08:51.206 "name": null, 00:08:51.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.206 "is_configured": false, 00:08:51.206 "data_offset": 0, 00:08:51.206 "data_size": 63488 00:08:51.206 }, 00:08:51.206 { 00:08:51.206 "name": "BaseBdev2", 00:08:51.206 "uuid": "32a87b42-be36-4c5e-a186-867154e19516", 00:08:51.206 "is_configured": true, 00:08:51.206 "data_offset": 2048, 00:08:51.206 "data_size": 63488 00:08:51.206 }, 00:08:51.206 { 00:08:51.206 "name": "BaseBdev3", 00:08:51.206 "uuid": "b78b9909-3d11-4d7c-bdb3-d1660e592cd1", 00:08:51.206 "is_configured": true, 00:08:51.206 "data_offset": 2048, 00:08:51.206 "data_size": 63488 00:08:51.206 }, 00:08:51.206 { 00:08:51.206 "name": "BaseBdev4", 00:08:51.206 "uuid": "abb3192f-220a-4a5c-9f4c-e3c4049b5634", 00:08:51.206 "is_configured": true, 00:08:51.206 "data_offset": 2048, 00:08:51.206 "data_size": 63488 00:08:51.206 } 00:08:51.206 ] 00:08:51.206 }' 00:08:51.206 14:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.206 14:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.466 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:51.466 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:51.466 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.466 
14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.466 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.466 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:51.466 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.466 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:51.466 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:51.466 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:51.466 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.466 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.466 [2024-10-01 14:33:43.055960] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:51.466 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.466 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:51.466 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:51.466 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.466 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:51.466 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.466 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.466 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:51.466 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:51.466 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:51.466 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:51.466 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.466 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.729 [2024-10-01 14:33:43.150934] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:51.729 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.729 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:51.729 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:51.729 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.729 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:51.729 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.729 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.729 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.729 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:51.729 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:51.729 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:08:51.729 14:33:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.729 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.729 [2024-10-01 14:33:43.249501] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:08:51.729 [2024-10-01 14:33:43.249557] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:51.729 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.729 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:51.729 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:51.729 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:51.729 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.729 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.729 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.729 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.729 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:51.729 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:51.729 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:08:51.729 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:51.729 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:51.729 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:08:51.729 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.729 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.729 BaseBdev2 00:08:51.729 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.729 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:51.729 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:51.729 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:51.729 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:51.729 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:51.729 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:51.729 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:51.729 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.729 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.729 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.729 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:51.729 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.729 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.729 [ 00:08:51.729 { 00:08:51.729 "name": "BaseBdev2", 00:08:51.729 "aliases": [ 00:08:51.729 
"162d23b1-c48d-4b9d-a93a-51aeb03cd55e" 00:08:51.729 ], 00:08:51.729 "product_name": "Malloc disk", 00:08:51.729 "block_size": 512, 00:08:51.729 "num_blocks": 65536, 00:08:51.729 "uuid": "162d23b1-c48d-4b9d-a93a-51aeb03cd55e", 00:08:51.729 "assigned_rate_limits": { 00:08:51.729 "rw_ios_per_sec": 0, 00:08:51.729 "rw_mbytes_per_sec": 0, 00:08:51.729 "r_mbytes_per_sec": 0, 00:08:51.729 "w_mbytes_per_sec": 0 00:08:51.729 }, 00:08:51.729 "claimed": false, 00:08:51.729 "zoned": false, 00:08:51.729 "supported_io_types": { 00:08:51.729 "read": true, 00:08:51.729 "write": true, 00:08:51.729 "unmap": true, 00:08:51.729 "flush": true, 00:08:51.729 "reset": true, 00:08:51.729 "nvme_admin": false, 00:08:51.729 "nvme_io": false, 00:08:51.729 "nvme_io_md": false, 00:08:51.729 "write_zeroes": true, 00:08:51.729 "zcopy": true, 00:08:51.729 "get_zone_info": false, 00:08:51.729 "zone_management": false, 00:08:51.729 "zone_append": false, 00:08:51.729 "compare": false, 00:08:51.729 "compare_and_write": false, 00:08:51.729 "abort": true, 00:08:51.729 "seek_hole": false, 00:08:51.729 "seek_data": false, 00:08:51.729 "copy": true, 00:08:51.729 "nvme_iov_md": false 00:08:51.729 }, 00:08:51.729 "memory_domains": [ 00:08:51.729 { 00:08:51.729 "dma_device_id": "system", 00:08:51.729 "dma_device_type": 1 00:08:51.729 }, 00:08:51.729 { 00:08:51.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.729 "dma_device_type": 2 00:08:51.729 } 00:08:51.729 ], 00:08:51.729 "driver_specific": {} 00:08:51.729 } 00:08:51.729 ] 00:08:51.990 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.990 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:51.990 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:51.990 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:51.990 14:33:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:51.990 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.990 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.990 BaseBdev3 00:08:51.990 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.990 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:51.990 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:51.990 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:51.990 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:51.990 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:51.990 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:51.990 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:51.990 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.990 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.990 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.990 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:51.990 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.990 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.990 [ 00:08:51.990 { 
00:08:51.990 "name": "BaseBdev3", 00:08:51.990 "aliases": [ 00:08:51.990 "7a659de3-4c21-4c5b-a900-156a31bf09da" 00:08:51.990 ], 00:08:51.990 "product_name": "Malloc disk", 00:08:51.990 "block_size": 512, 00:08:51.990 "num_blocks": 65536, 00:08:51.990 "uuid": "7a659de3-4c21-4c5b-a900-156a31bf09da", 00:08:51.990 "assigned_rate_limits": { 00:08:51.990 "rw_ios_per_sec": 0, 00:08:51.990 "rw_mbytes_per_sec": 0, 00:08:51.990 "r_mbytes_per_sec": 0, 00:08:51.990 "w_mbytes_per_sec": 0 00:08:51.990 }, 00:08:51.990 "claimed": false, 00:08:51.990 "zoned": false, 00:08:51.990 "supported_io_types": { 00:08:51.990 "read": true, 00:08:51.990 "write": true, 00:08:51.990 "unmap": true, 00:08:51.990 "flush": true, 00:08:51.990 "reset": true, 00:08:51.990 "nvme_admin": false, 00:08:51.990 "nvme_io": false, 00:08:51.990 "nvme_io_md": false, 00:08:51.990 "write_zeroes": true, 00:08:51.990 "zcopy": true, 00:08:51.990 "get_zone_info": false, 00:08:51.990 "zone_management": false, 00:08:51.990 "zone_append": false, 00:08:51.990 "compare": false, 00:08:51.990 "compare_and_write": false, 00:08:51.990 "abort": true, 00:08:51.990 "seek_hole": false, 00:08:51.990 "seek_data": false, 00:08:51.990 "copy": true, 00:08:51.990 "nvme_iov_md": false 00:08:51.990 }, 00:08:51.990 "memory_domains": [ 00:08:51.990 { 00:08:51.990 "dma_device_id": "system", 00:08:51.990 "dma_device_type": 1 00:08:51.990 }, 00:08:51.990 { 00:08:51.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.990 "dma_device_type": 2 00:08:51.990 } 00:08:51.990 ], 00:08:51.990 "driver_specific": {} 00:08:51.990 } 00:08:51.990 ] 00:08:51.990 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.990 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:51.990 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:51.990 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:08:51.990 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:08:51.990 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.990 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.990 BaseBdev4 00:08:51.990 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.990 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:08:51.990 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:08:51.990 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:51.991 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:51.991 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:51.991 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:51.991 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:51.991 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.991 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.991 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.991 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:08:51.991 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.991 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:08:51.991 [ 00:08:51.991 { 00:08:51.991 "name": "BaseBdev4", 00:08:51.991 "aliases": [ 00:08:51.991 "408a6fa9-c830-4d99-8574-585a2b294266" 00:08:51.991 ], 00:08:51.991 "product_name": "Malloc disk", 00:08:51.991 "block_size": 512, 00:08:51.991 "num_blocks": 65536, 00:08:51.991 "uuid": "408a6fa9-c830-4d99-8574-585a2b294266", 00:08:51.991 "assigned_rate_limits": { 00:08:51.991 "rw_ios_per_sec": 0, 00:08:51.991 "rw_mbytes_per_sec": 0, 00:08:51.991 "r_mbytes_per_sec": 0, 00:08:51.991 "w_mbytes_per_sec": 0 00:08:51.991 }, 00:08:51.991 "claimed": false, 00:08:51.991 "zoned": false, 00:08:51.991 "supported_io_types": { 00:08:51.991 "read": true, 00:08:51.991 "write": true, 00:08:51.991 "unmap": true, 00:08:51.991 "flush": true, 00:08:51.991 "reset": true, 00:08:51.991 "nvme_admin": false, 00:08:51.991 "nvme_io": false, 00:08:51.991 "nvme_io_md": false, 00:08:51.991 "write_zeroes": true, 00:08:51.991 "zcopy": true, 00:08:51.991 "get_zone_info": false, 00:08:51.991 "zone_management": false, 00:08:51.991 "zone_append": false, 00:08:51.991 "compare": false, 00:08:51.991 "compare_and_write": false, 00:08:51.991 "abort": true, 00:08:51.991 "seek_hole": false, 00:08:51.991 "seek_data": false, 00:08:51.991 "copy": true, 00:08:51.991 "nvme_iov_md": false 00:08:51.991 }, 00:08:51.991 "memory_domains": [ 00:08:51.991 { 00:08:51.991 "dma_device_id": "system", 00:08:51.991 "dma_device_type": 1 00:08:51.991 }, 00:08:51.991 { 00:08:51.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.991 "dma_device_type": 2 00:08:51.991 } 00:08:51.991 ], 00:08:51.991 "driver_specific": {} 00:08:51.991 } 00:08:51.991 ] 00:08:51.991 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.991 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:51.991 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:51.991 14:33:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:51.991 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:51.991 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.991 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.991 [2024-10-01 14:33:43.528613] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:51.991 [2024-10-01 14:33:43.528773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:51.991 [2024-10-01 14:33:43.528845] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:51.991 [2024-10-01 14:33:43.530737] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:51.991 [2024-10-01 14:33:43.530865] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:08:51.991 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.991 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:51.991 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.991 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.991 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:51.991 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.991 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:08:51.991 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.991 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.991 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.991 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.991 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.991 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.991 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.991 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.991 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.991 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.991 "name": "Existed_Raid", 00:08:51.991 "uuid": "3c4adf48-11ed-4109-8df0-a6e81c0017b1", 00:08:51.991 "strip_size_kb": 64, 00:08:51.991 "state": "configuring", 00:08:51.991 "raid_level": "concat", 00:08:51.991 "superblock": true, 00:08:51.991 "num_base_bdevs": 4, 00:08:51.991 "num_base_bdevs_discovered": 3, 00:08:51.991 "num_base_bdevs_operational": 4, 00:08:51.991 "base_bdevs_list": [ 00:08:51.991 { 00:08:51.991 "name": "BaseBdev1", 00:08:51.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.991 "is_configured": false, 00:08:51.991 "data_offset": 0, 00:08:51.991 "data_size": 0 00:08:51.991 }, 00:08:51.991 { 00:08:51.991 "name": "BaseBdev2", 00:08:51.991 "uuid": "162d23b1-c48d-4b9d-a93a-51aeb03cd55e", 00:08:51.991 "is_configured": true, 00:08:51.991 "data_offset": 2048, 00:08:51.991 "data_size": 63488 
00:08:51.991 }, 00:08:51.991 { 00:08:51.991 "name": "BaseBdev3", 00:08:51.991 "uuid": "7a659de3-4c21-4c5b-a900-156a31bf09da", 00:08:51.991 "is_configured": true, 00:08:51.991 "data_offset": 2048, 00:08:51.991 "data_size": 63488 00:08:51.991 }, 00:08:51.991 { 00:08:51.991 "name": "BaseBdev4", 00:08:51.991 "uuid": "408a6fa9-c830-4d99-8574-585a2b294266", 00:08:51.991 "is_configured": true, 00:08:51.991 "data_offset": 2048, 00:08:51.991 "data_size": 63488 00:08:51.991 } 00:08:51.991 ] 00:08:51.991 }' 00:08:51.991 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.991 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.252 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:52.253 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.253 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.253 [2024-10-01 14:33:43.860650] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:52.253 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.253 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:52.253 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.253 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.253 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:52.253 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.253 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:08:52.253 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.253 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.253 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.253 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.253 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.253 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.253 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.253 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.253 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.253 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.253 "name": "Existed_Raid", 00:08:52.253 "uuid": "3c4adf48-11ed-4109-8df0-a6e81c0017b1", 00:08:52.253 "strip_size_kb": 64, 00:08:52.253 "state": "configuring", 00:08:52.253 "raid_level": "concat", 00:08:52.253 "superblock": true, 00:08:52.253 "num_base_bdevs": 4, 00:08:52.253 "num_base_bdevs_discovered": 2, 00:08:52.253 "num_base_bdevs_operational": 4, 00:08:52.253 "base_bdevs_list": [ 00:08:52.253 { 00:08:52.253 "name": "BaseBdev1", 00:08:52.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.253 "is_configured": false, 00:08:52.253 "data_offset": 0, 00:08:52.253 "data_size": 0 00:08:52.253 }, 00:08:52.253 { 00:08:52.253 "name": null, 00:08:52.253 "uuid": "162d23b1-c48d-4b9d-a93a-51aeb03cd55e", 00:08:52.253 "is_configured": false, 00:08:52.253 "data_offset": 0, 00:08:52.253 "data_size": 63488 
00:08:52.253 }, 00:08:52.253 { 00:08:52.253 "name": "BaseBdev3", 00:08:52.253 "uuid": "7a659de3-4c21-4c5b-a900-156a31bf09da", 00:08:52.253 "is_configured": true, 00:08:52.253 "data_offset": 2048, 00:08:52.253 "data_size": 63488 00:08:52.253 }, 00:08:52.253 { 00:08:52.253 "name": "BaseBdev4", 00:08:52.253 "uuid": "408a6fa9-c830-4d99-8574-585a2b294266", 00:08:52.253 "is_configured": true, 00:08:52.253 "data_offset": 2048, 00:08:52.253 "data_size": 63488 00:08:52.253 } 00:08:52.253 ] 00:08:52.253 }' 00:08:52.253 14:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.253 14:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.515 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.515 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:52.515 14:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.515 14:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.775 14:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.775 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:52.775 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:52.775 14:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.775 14:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.775 [2024-10-01 14:33:44.247122] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:52.775 BaseBdev1 00:08:52.775 14:33:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.775 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:52.775 14:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:52.775 14:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:52.775 14:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:52.775 14:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:52.775 14:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:52.775 14:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:52.775 14:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.775 14:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.775 14:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.775 14:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:52.775 14:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.775 14:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.775 [ 00:08:52.775 { 00:08:52.775 "name": "BaseBdev1", 00:08:52.775 "aliases": [ 00:08:52.775 "04611de7-a4ea-47cb-9a75-bd2dd584a00a" 00:08:52.775 ], 00:08:52.775 "product_name": "Malloc disk", 00:08:52.775 "block_size": 512, 00:08:52.775 "num_blocks": 65536, 00:08:52.775 "uuid": "04611de7-a4ea-47cb-9a75-bd2dd584a00a", 00:08:52.775 "assigned_rate_limits": { 00:08:52.775 "rw_ios_per_sec": 0, 00:08:52.775 "rw_mbytes_per_sec": 0, 
00:08:52.775 "r_mbytes_per_sec": 0, 00:08:52.775 "w_mbytes_per_sec": 0 00:08:52.775 }, 00:08:52.775 "claimed": true, 00:08:52.775 "claim_type": "exclusive_write", 00:08:52.775 "zoned": false, 00:08:52.775 "supported_io_types": { 00:08:52.775 "read": true, 00:08:52.775 "write": true, 00:08:52.775 "unmap": true, 00:08:52.775 "flush": true, 00:08:52.775 "reset": true, 00:08:52.775 "nvme_admin": false, 00:08:52.775 "nvme_io": false, 00:08:52.775 "nvme_io_md": false, 00:08:52.775 "write_zeroes": true, 00:08:52.775 "zcopy": true, 00:08:52.775 "get_zone_info": false, 00:08:52.775 "zone_management": false, 00:08:52.775 "zone_append": false, 00:08:52.775 "compare": false, 00:08:52.775 "compare_and_write": false, 00:08:52.775 "abort": true, 00:08:52.775 "seek_hole": false, 00:08:52.775 "seek_data": false, 00:08:52.775 "copy": true, 00:08:52.775 "nvme_iov_md": false 00:08:52.775 }, 00:08:52.775 "memory_domains": [ 00:08:52.775 { 00:08:52.775 "dma_device_id": "system", 00:08:52.775 "dma_device_type": 1 00:08:52.775 }, 00:08:52.775 { 00:08:52.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.775 "dma_device_type": 2 00:08:52.775 } 00:08:52.775 ], 00:08:52.775 "driver_specific": {} 00:08:52.775 } 00:08:52.775 ] 00:08:52.775 14:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.775 14:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:52.775 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:52.775 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.775 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.775 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:52.775 14:33:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.775 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:52.775 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.775 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.775 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.776 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.776 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.776 14:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.776 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.776 14:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.776 14:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.776 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.776 "name": "Existed_Raid", 00:08:52.776 "uuid": "3c4adf48-11ed-4109-8df0-a6e81c0017b1", 00:08:52.776 "strip_size_kb": 64, 00:08:52.776 "state": "configuring", 00:08:52.776 "raid_level": "concat", 00:08:52.776 "superblock": true, 00:08:52.776 "num_base_bdevs": 4, 00:08:52.776 "num_base_bdevs_discovered": 3, 00:08:52.776 "num_base_bdevs_operational": 4, 00:08:52.776 "base_bdevs_list": [ 00:08:52.776 { 00:08:52.776 "name": "BaseBdev1", 00:08:52.776 "uuid": "04611de7-a4ea-47cb-9a75-bd2dd584a00a", 00:08:52.776 "is_configured": true, 00:08:52.776 "data_offset": 2048, 00:08:52.776 "data_size": 63488 00:08:52.776 }, 00:08:52.776 { 
00:08:52.776 "name": null, 00:08:52.776 "uuid": "162d23b1-c48d-4b9d-a93a-51aeb03cd55e", 00:08:52.776 "is_configured": false, 00:08:52.776 "data_offset": 0, 00:08:52.776 "data_size": 63488 00:08:52.776 }, 00:08:52.776 { 00:08:52.776 "name": "BaseBdev3", 00:08:52.776 "uuid": "7a659de3-4c21-4c5b-a900-156a31bf09da", 00:08:52.776 "is_configured": true, 00:08:52.776 "data_offset": 2048, 00:08:52.776 "data_size": 63488 00:08:52.776 }, 00:08:52.776 { 00:08:52.776 "name": "BaseBdev4", 00:08:52.776 "uuid": "408a6fa9-c830-4d99-8574-585a2b294266", 00:08:52.776 "is_configured": true, 00:08:52.776 "data_offset": 2048, 00:08:52.776 "data_size": 63488 00:08:52.776 } 00:08:52.776 ] 00:08:52.776 }' 00:08:52.776 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.776 14:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.037 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.037 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:53.037 14:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.037 14:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.037 14:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.037 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:53.037 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:53.037 14:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.037 14:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.037 [2024-10-01 14:33:44.623285] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:53.037 14:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.037 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:53.037 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.037 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.037 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:53.037 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.037 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:53.037 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.037 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.037 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.037 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.037 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.037 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.037 14:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.037 14:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.037 14:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.037 14:33:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.037 "name": "Existed_Raid", 00:08:53.037 "uuid": "3c4adf48-11ed-4109-8df0-a6e81c0017b1", 00:08:53.037 "strip_size_kb": 64, 00:08:53.037 "state": "configuring", 00:08:53.037 "raid_level": "concat", 00:08:53.037 "superblock": true, 00:08:53.037 "num_base_bdevs": 4, 00:08:53.037 "num_base_bdevs_discovered": 2, 00:08:53.037 "num_base_bdevs_operational": 4, 00:08:53.037 "base_bdevs_list": [ 00:08:53.037 { 00:08:53.037 "name": "BaseBdev1", 00:08:53.037 "uuid": "04611de7-a4ea-47cb-9a75-bd2dd584a00a", 00:08:53.037 "is_configured": true, 00:08:53.037 "data_offset": 2048, 00:08:53.037 "data_size": 63488 00:08:53.037 }, 00:08:53.037 { 00:08:53.037 "name": null, 00:08:53.037 "uuid": "162d23b1-c48d-4b9d-a93a-51aeb03cd55e", 00:08:53.037 "is_configured": false, 00:08:53.037 "data_offset": 0, 00:08:53.037 "data_size": 63488 00:08:53.037 }, 00:08:53.037 { 00:08:53.037 "name": null, 00:08:53.037 "uuid": "7a659de3-4c21-4c5b-a900-156a31bf09da", 00:08:53.037 "is_configured": false, 00:08:53.037 "data_offset": 0, 00:08:53.037 "data_size": 63488 00:08:53.037 }, 00:08:53.037 { 00:08:53.037 "name": "BaseBdev4", 00:08:53.037 "uuid": "408a6fa9-c830-4d99-8574-585a2b294266", 00:08:53.037 "is_configured": true, 00:08:53.037 "data_offset": 2048, 00:08:53.037 "data_size": 63488 00:08:53.037 } 00:08:53.037 ] 00:08:53.037 }' 00:08:53.037 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.037 14:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.298 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.298 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:53.298 14:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.298 
14:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.298 14:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.559 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:53.559 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:53.559 14:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.559 14:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.559 [2024-10-01 14:33:44.983380] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:53.559 14:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.559 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:53.559 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.559 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.559 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:53.559 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.559 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:53.559 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.559 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.559 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:53.559 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.559 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.559 14:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.559 14:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.559 14:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.559 14:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.559 14:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.559 "name": "Existed_Raid", 00:08:53.559 "uuid": "3c4adf48-11ed-4109-8df0-a6e81c0017b1", 00:08:53.559 "strip_size_kb": 64, 00:08:53.559 "state": "configuring", 00:08:53.559 "raid_level": "concat", 00:08:53.559 "superblock": true, 00:08:53.559 "num_base_bdevs": 4, 00:08:53.559 "num_base_bdevs_discovered": 3, 00:08:53.559 "num_base_bdevs_operational": 4, 00:08:53.559 "base_bdevs_list": [ 00:08:53.559 { 00:08:53.559 "name": "BaseBdev1", 00:08:53.559 "uuid": "04611de7-a4ea-47cb-9a75-bd2dd584a00a", 00:08:53.559 "is_configured": true, 00:08:53.559 "data_offset": 2048, 00:08:53.559 "data_size": 63488 00:08:53.559 }, 00:08:53.559 { 00:08:53.559 "name": null, 00:08:53.559 "uuid": "162d23b1-c48d-4b9d-a93a-51aeb03cd55e", 00:08:53.559 "is_configured": false, 00:08:53.559 "data_offset": 0, 00:08:53.559 "data_size": 63488 00:08:53.559 }, 00:08:53.559 { 00:08:53.559 "name": "BaseBdev3", 00:08:53.559 "uuid": "7a659de3-4c21-4c5b-a900-156a31bf09da", 00:08:53.559 "is_configured": true, 00:08:53.559 "data_offset": 2048, 00:08:53.559 "data_size": 63488 00:08:53.559 }, 00:08:53.559 { 00:08:53.559 "name": "BaseBdev4", 00:08:53.559 "uuid": 
"408a6fa9-c830-4d99-8574-585a2b294266", 00:08:53.559 "is_configured": true, 00:08:53.559 "data_offset": 2048, 00:08:53.559 "data_size": 63488 00:08:53.559 } 00:08:53.559 ] 00:08:53.559 }' 00:08:53.559 14:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.559 14:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.819 14:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:53.819 14:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.819 14:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.819 14:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.819 14:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.819 14:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:53.819 14:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:53.819 14:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.819 14:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.819 [2024-10-01 14:33:45.339466] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:53.819 14:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.819 14:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:53.819 14:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.820 14:33:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.820 14:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:53.820 14:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.820 14:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:53.820 14:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.820 14:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.820 14:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.820 14:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.820 14:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.820 14:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.820 14:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.820 14:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.820 14:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.820 14:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.820 "name": "Existed_Raid", 00:08:53.820 "uuid": "3c4adf48-11ed-4109-8df0-a6e81c0017b1", 00:08:53.820 "strip_size_kb": 64, 00:08:53.820 "state": "configuring", 00:08:53.820 "raid_level": "concat", 00:08:53.820 "superblock": true, 00:08:53.820 "num_base_bdevs": 4, 00:08:53.820 "num_base_bdevs_discovered": 2, 00:08:53.820 "num_base_bdevs_operational": 4, 00:08:53.820 "base_bdevs_list": [ 00:08:53.820 { 00:08:53.820 "name": null, 00:08:53.820 
"uuid": "04611de7-a4ea-47cb-9a75-bd2dd584a00a", 00:08:53.820 "is_configured": false, 00:08:53.820 "data_offset": 0, 00:08:53.820 "data_size": 63488 00:08:53.820 }, 00:08:53.820 { 00:08:53.820 "name": null, 00:08:53.820 "uuid": "162d23b1-c48d-4b9d-a93a-51aeb03cd55e", 00:08:53.820 "is_configured": false, 00:08:53.820 "data_offset": 0, 00:08:53.820 "data_size": 63488 00:08:53.820 }, 00:08:53.820 { 00:08:53.820 "name": "BaseBdev3", 00:08:53.820 "uuid": "7a659de3-4c21-4c5b-a900-156a31bf09da", 00:08:53.820 "is_configured": true, 00:08:53.820 "data_offset": 2048, 00:08:53.820 "data_size": 63488 00:08:53.820 }, 00:08:53.820 { 00:08:53.820 "name": "BaseBdev4", 00:08:53.820 "uuid": "408a6fa9-c830-4d99-8574-585a2b294266", 00:08:53.820 "is_configured": true, 00:08:53.820 "data_offset": 2048, 00:08:53.820 "data_size": 63488 00:08:53.820 } 00:08:53.820 ] 00:08:53.820 }' 00:08:53.820 14:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.820 14:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.080 14:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.080 14:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:54.080 14:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.080 14:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.080 14:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.080 14:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:54.080 14:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:54.080 14:33:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.080 14:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.080 [2024-10-01 14:33:45.753477] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:54.080 14:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.080 14:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:08:54.080 14:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.080 14:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.080 14:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:54.080 14:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.080 14:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:54.080 14:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.080 14:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.080 14:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.080 14:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.080 14:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.080 14:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.080 14:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.080 14:33:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.342 14:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.342 14:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.342 "name": "Existed_Raid", 00:08:54.342 "uuid": "3c4adf48-11ed-4109-8df0-a6e81c0017b1", 00:08:54.342 "strip_size_kb": 64, 00:08:54.342 "state": "configuring", 00:08:54.342 "raid_level": "concat", 00:08:54.342 "superblock": true, 00:08:54.342 "num_base_bdevs": 4, 00:08:54.342 "num_base_bdevs_discovered": 3, 00:08:54.342 "num_base_bdevs_operational": 4, 00:08:54.342 "base_bdevs_list": [ 00:08:54.342 { 00:08:54.342 "name": null, 00:08:54.342 "uuid": "04611de7-a4ea-47cb-9a75-bd2dd584a00a", 00:08:54.342 "is_configured": false, 00:08:54.342 "data_offset": 0, 00:08:54.342 "data_size": 63488 00:08:54.342 }, 00:08:54.342 { 00:08:54.342 "name": "BaseBdev2", 00:08:54.342 "uuid": "162d23b1-c48d-4b9d-a93a-51aeb03cd55e", 00:08:54.342 "is_configured": true, 00:08:54.342 "data_offset": 2048, 00:08:54.342 "data_size": 63488 00:08:54.342 }, 00:08:54.342 { 00:08:54.342 "name": "BaseBdev3", 00:08:54.342 "uuid": "7a659de3-4c21-4c5b-a900-156a31bf09da", 00:08:54.342 "is_configured": true, 00:08:54.342 "data_offset": 2048, 00:08:54.342 "data_size": 63488 00:08:54.342 }, 00:08:54.342 { 00:08:54.342 "name": "BaseBdev4", 00:08:54.342 "uuid": "408a6fa9-c830-4d99-8574-585a2b294266", 00:08:54.342 "is_configured": true, 00:08:54.342 "data_offset": 2048, 00:08:54.342 "data_size": 63488 00:08:54.342 } 00:08:54.342 ] 00:08:54.342 }' 00:08:54.342 14:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.342 14:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.625 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.625 14:33:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.625 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.625 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:54.625 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.625 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:54.625 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.625 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:54.625 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.625 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.625 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.625 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 04611de7-a4ea-47cb-9a75-bd2dd584a00a 00:08:54.625 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.625 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.625 NewBaseBdev 00:08:54.625 [2024-10-01 14:33:46.147832] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:54.625 [2024-10-01 14:33:46.148019] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:54.625 [2024-10-01 14:33:46.148031] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:08:54.625 [2024-10-01 14:33:46.148276] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:54.625 [2024-10-01 14:33:46.148392] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:54.625 [2024-10-01 14:33:46.148402] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:54.625 [2024-10-01 14:33:46.148510] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:54.625 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.625 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:54.625 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:54.625 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:54.625 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:54.625 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:54.625 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:54.625 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:54.625 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.625 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.625 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.625 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:54.625 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.625 
14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.625 [ 00:08:54.625 { 00:08:54.625 "name": "NewBaseBdev", 00:08:54.625 "aliases": [ 00:08:54.625 "04611de7-a4ea-47cb-9a75-bd2dd584a00a" 00:08:54.625 ], 00:08:54.625 "product_name": "Malloc disk", 00:08:54.625 "block_size": 512, 00:08:54.625 "num_blocks": 65536, 00:08:54.625 "uuid": "04611de7-a4ea-47cb-9a75-bd2dd584a00a", 00:08:54.625 "assigned_rate_limits": { 00:08:54.625 "rw_ios_per_sec": 0, 00:08:54.625 "rw_mbytes_per_sec": 0, 00:08:54.625 "r_mbytes_per_sec": 0, 00:08:54.625 "w_mbytes_per_sec": 0 00:08:54.625 }, 00:08:54.625 "claimed": true, 00:08:54.625 "claim_type": "exclusive_write", 00:08:54.625 "zoned": false, 00:08:54.625 "supported_io_types": { 00:08:54.625 "read": true, 00:08:54.625 "write": true, 00:08:54.625 "unmap": true, 00:08:54.625 "flush": true, 00:08:54.625 "reset": true, 00:08:54.625 "nvme_admin": false, 00:08:54.625 "nvme_io": false, 00:08:54.625 "nvme_io_md": false, 00:08:54.625 "write_zeroes": true, 00:08:54.625 "zcopy": true, 00:08:54.625 "get_zone_info": false, 00:08:54.625 "zone_management": false, 00:08:54.625 "zone_append": false, 00:08:54.625 "compare": false, 00:08:54.625 "compare_and_write": false, 00:08:54.625 "abort": true, 00:08:54.625 "seek_hole": false, 00:08:54.625 "seek_data": false, 00:08:54.625 "copy": true, 00:08:54.625 "nvme_iov_md": false 00:08:54.625 }, 00:08:54.625 "memory_domains": [ 00:08:54.625 { 00:08:54.625 "dma_device_id": "system", 00:08:54.625 "dma_device_type": 1 00:08:54.625 }, 00:08:54.625 { 00:08:54.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.625 "dma_device_type": 2 00:08:54.625 } 00:08:54.625 ], 00:08:54.625 "driver_specific": {} 00:08:54.625 } 00:08:54.625 ] 00:08:54.625 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.625 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:54.625 14:33:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:08:54.625 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.625 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:54.625 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:54.625 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.625 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:54.625 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.625 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.625 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.625 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.625 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.625 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.625 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.625 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.625 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.625 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.625 "name": "Existed_Raid", 00:08:54.625 "uuid": "3c4adf48-11ed-4109-8df0-a6e81c0017b1", 00:08:54.625 "strip_size_kb": 64, 00:08:54.625 
"state": "online", 00:08:54.625 "raid_level": "concat", 00:08:54.625 "superblock": true, 00:08:54.625 "num_base_bdevs": 4, 00:08:54.625 "num_base_bdevs_discovered": 4, 00:08:54.625 "num_base_bdevs_operational": 4, 00:08:54.625 "base_bdevs_list": [ 00:08:54.625 { 00:08:54.625 "name": "NewBaseBdev", 00:08:54.625 "uuid": "04611de7-a4ea-47cb-9a75-bd2dd584a00a", 00:08:54.625 "is_configured": true, 00:08:54.625 "data_offset": 2048, 00:08:54.625 "data_size": 63488 00:08:54.625 }, 00:08:54.625 { 00:08:54.625 "name": "BaseBdev2", 00:08:54.625 "uuid": "162d23b1-c48d-4b9d-a93a-51aeb03cd55e", 00:08:54.625 "is_configured": true, 00:08:54.625 "data_offset": 2048, 00:08:54.625 "data_size": 63488 00:08:54.625 }, 00:08:54.625 { 00:08:54.625 "name": "BaseBdev3", 00:08:54.625 "uuid": "7a659de3-4c21-4c5b-a900-156a31bf09da", 00:08:54.625 "is_configured": true, 00:08:54.625 "data_offset": 2048, 00:08:54.625 "data_size": 63488 00:08:54.625 }, 00:08:54.625 { 00:08:54.625 "name": "BaseBdev4", 00:08:54.625 "uuid": "408a6fa9-c830-4d99-8574-585a2b294266", 00:08:54.625 "is_configured": true, 00:08:54.625 "data_offset": 2048, 00:08:54.626 "data_size": 63488 00:08:54.626 } 00:08:54.626 ] 00:08:54.626 }' 00:08:54.626 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.626 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.905 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:54.905 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:54.905 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:54.905 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:54.905 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:54.905 
14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:54.905 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:54.906 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:54.906 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.906 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.906 [2024-10-01 14:33:46.504328] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:54.906 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.906 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:54.906 "name": "Existed_Raid", 00:08:54.906 "aliases": [ 00:08:54.906 "3c4adf48-11ed-4109-8df0-a6e81c0017b1" 00:08:54.906 ], 00:08:54.906 "product_name": "Raid Volume", 00:08:54.906 "block_size": 512, 00:08:54.906 "num_blocks": 253952, 00:08:54.906 "uuid": "3c4adf48-11ed-4109-8df0-a6e81c0017b1", 00:08:54.906 "assigned_rate_limits": { 00:08:54.906 "rw_ios_per_sec": 0, 00:08:54.906 "rw_mbytes_per_sec": 0, 00:08:54.906 "r_mbytes_per_sec": 0, 00:08:54.906 "w_mbytes_per_sec": 0 00:08:54.906 }, 00:08:54.906 "claimed": false, 00:08:54.906 "zoned": false, 00:08:54.906 "supported_io_types": { 00:08:54.906 "read": true, 00:08:54.906 "write": true, 00:08:54.906 "unmap": true, 00:08:54.906 "flush": true, 00:08:54.906 "reset": true, 00:08:54.906 "nvme_admin": false, 00:08:54.906 "nvme_io": false, 00:08:54.906 "nvme_io_md": false, 00:08:54.906 "write_zeroes": true, 00:08:54.906 "zcopy": false, 00:08:54.906 "get_zone_info": false, 00:08:54.906 "zone_management": false, 00:08:54.906 "zone_append": false, 00:08:54.906 "compare": false, 00:08:54.906 "compare_and_write": false, 00:08:54.906 "abort": 
false, 00:08:54.906 "seek_hole": false, 00:08:54.906 "seek_data": false, 00:08:54.906 "copy": false, 00:08:54.906 "nvme_iov_md": false 00:08:54.906 }, 00:08:54.906 "memory_domains": [ 00:08:54.906 { 00:08:54.906 "dma_device_id": "system", 00:08:54.906 "dma_device_type": 1 00:08:54.906 }, 00:08:54.906 { 00:08:54.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.906 "dma_device_type": 2 00:08:54.906 }, 00:08:54.906 { 00:08:54.906 "dma_device_id": "system", 00:08:54.906 "dma_device_type": 1 00:08:54.906 }, 00:08:54.906 { 00:08:54.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.906 "dma_device_type": 2 00:08:54.906 }, 00:08:54.906 { 00:08:54.906 "dma_device_id": "system", 00:08:54.906 "dma_device_type": 1 00:08:54.906 }, 00:08:54.906 { 00:08:54.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.906 "dma_device_type": 2 00:08:54.906 }, 00:08:54.906 { 00:08:54.906 "dma_device_id": "system", 00:08:54.906 "dma_device_type": 1 00:08:54.906 }, 00:08:54.906 { 00:08:54.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.906 "dma_device_type": 2 00:08:54.906 } 00:08:54.906 ], 00:08:54.906 "driver_specific": { 00:08:54.906 "raid": { 00:08:54.906 "uuid": "3c4adf48-11ed-4109-8df0-a6e81c0017b1", 00:08:54.906 "strip_size_kb": 64, 00:08:54.906 "state": "online", 00:08:54.906 "raid_level": "concat", 00:08:54.906 "superblock": true, 00:08:54.906 "num_base_bdevs": 4, 00:08:54.906 "num_base_bdevs_discovered": 4, 00:08:54.906 "num_base_bdevs_operational": 4, 00:08:54.906 "base_bdevs_list": [ 00:08:54.906 { 00:08:54.906 "name": "NewBaseBdev", 00:08:54.906 "uuid": "04611de7-a4ea-47cb-9a75-bd2dd584a00a", 00:08:54.906 "is_configured": true, 00:08:54.906 "data_offset": 2048, 00:08:54.906 "data_size": 63488 00:08:54.906 }, 00:08:54.906 { 00:08:54.906 "name": "BaseBdev2", 00:08:54.906 "uuid": "162d23b1-c48d-4b9d-a93a-51aeb03cd55e", 00:08:54.906 "is_configured": true, 00:08:54.906 "data_offset": 2048, 00:08:54.906 "data_size": 63488 00:08:54.906 }, 00:08:54.906 { 00:08:54.906 
"name": "BaseBdev3", 00:08:54.906 "uuid": "7a659de3-4c21-4c5b-a900-156a31bf09da", 00:08:54.906 "is_configured": true, 00:08:54.906 "data_offset": 2048, 00:08:54.906 "data_size": 63488 00:08:54.906 }, 00:08:54.906 { 00:08:54.906 "name": "BaseBdev4", 00:08:54.906 "uuid": "408a6fa9-c830-4d99-8574-585a2b294266", 00:08:54.906 "is_configured": true, 00:08:54.906 "data_offset": 2048, 00:08:54.906 "data_size": 63488 00:08:54.906 } 00:08:54.906 ] 00:08:54.906 } 00:08:54.906 } 00:08:54.906 }' 00:08:54.906 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:54.906 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:54.906 BaseBdev2 00:08:54.906 BaseBdev3 00:08:54.906 BaseBdev4' 00:08:54.906 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.906 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:54.906 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:54.906 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:54.906 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.906 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.906 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.167 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.167 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.167 14:33:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.167 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.167 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:55.167 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.167 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.167 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.167 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.167 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.167 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.167 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.167 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:55.167 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.167 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.167 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.167 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.167 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.167 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:08:55.167 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.167 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:08:55.167 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.167 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.167 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.167 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.167 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.167 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.167 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:55.167 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.167 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.167 [2024-10-01 14:33:46.716014] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:55.167 [2024-10-01 14:33:46.716134] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:55.167 [2024-10-01 14:33:46.716252] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:55.167 [2024-10-01 14:33:46.716339] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:55.167 [2024-10-01 14:33:46.716374] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:08:55.167 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.167 14:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70310 00:08:55.167 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 70310 ']' 00:08:55.167 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 70310 00:08:55.167 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:55.167 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:55.167 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70310 00:08:55.167 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:55.167 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:55.167 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70310' 00:08:55.167 killing process with pid 70310 00:08:55.167 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 70310 00:08:55.167 [2024-10-01 14:33:46.744460] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:55.167 14:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 70310 00:08:55.428 [2024-10-01 14:33:46.986562] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:56.373 ************************************ 00:08:56.373 END TEST raid_state_function_test_sb 00:08:56.373 ************************************ 00:08:56.373 14:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:56.373 00:08:56.373 real 0m8.574s 00:08:56.373 user 0m13.635s 00:08:56.373 sys 
0m1.329s 00:08:56.373 14:33:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:56.373 14:33:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.373 14:33:47 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:08:56.373 14:33:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:56.373 14:33:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:56.373 14:33:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:56.374 ************************************ 00:08:56.374 START TEST raid_superblock_test 00:08:56.374 ************************************ 00:08:56.374 14:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4 00:08:56.374 14:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:56.374 14:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:08:56.374 14:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:56.374 14:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:56.374 14:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:56.374 14:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:56.374 14:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:56.374 14:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:56.374 14:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:56.374 14:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:56.374 14:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:08:56.374 14:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:56.374 14:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:56.374 14:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:56.374 14:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:56.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.374 14:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:56.374 14:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70948 00:08:56.374 14:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70948 00:08:56.374 14:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 70948 ']' 00:08:56.374 14:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.374 14:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:56.374 14:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.374 14:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:56.374 14:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.374 14:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:56.374 [2024-10-01 14:33:47.950363] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:08:56.374 [2024-10-01 14:33:47.950763] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70948 ] 00:08:56.632 [2024-10-01 14:33:48.120563] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.632 [2024-10-01 14:33:48.306725] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.891 [2024-10-01 14:33:48.442282] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:56.891 [2024-10-01 14:33:48.442320] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:57.148 14:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:57.148 14:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:57.148 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:57.148 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:57.148 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:57.148 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:57.148 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:57.148 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:57.148 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:57.148 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:57.148 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:57.148 
14:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.148 14:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.148 malloc1 00:08:57.148 14:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.148 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:57.148 14:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.148 14:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.148 [2024-10-01 14:33:48.826582] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:57.149 [2024-10-01 14:33:48.826780] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:57.149 [2024-10-01 14:33:48.826825] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:57.149 [2024-10-01 14:33:48.827259] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:57.149 [2024-10-01 14:33:48.829491] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:57.149 [2024-10-01 14:33:48.829605] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:57.149 pt1 00:08:57.149 14:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.149 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:57.149 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:57.149 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:57.407 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:57.407 14:33:48 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:57.407 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:57.407 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:57.407 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:57.407 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:57.407 14:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.407 14:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.407 malloc2 00:08:57.407 14:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.407 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:57.407 14:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.407 14:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.407 [2024-10-01 14:33:48.874020] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:57.407 [2024-10-01 14:33:48.874072] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:57.407 [2024-10-01 14:33:48.874093] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:57.407 [2024-10-01 14:33:48.874102] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:57.407 [2024-10-01 14:33:48.876156] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:57.407 [2024-10-01 14:33:48.876188] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:57.407 
pt2 00:08:57.407 14:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.407 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:57.407 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:57.407 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:57.407 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:57.407 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:57.407 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:57.407 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:57.407 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:57.408 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:57.408 14:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.408 14:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.408 malloc3 00:08:57.408 14:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.408 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:57.408 14:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.408 14:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.408 [2024-10-01 14:33:48.913497] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:57.408 [2024-10-01 14:33:48.913657] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:57.408 [2024-10-01 14:33:48.913683] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:57.408 [2024-10-01 14:33:48.913692] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:57.408 [2024-10-01 14:33:48.915807] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:57.408 [2024-10-01 14:33:48.915839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:57.408 pt3 00:08:57.408 14:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.408 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:57.408 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:57.408 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:08:57.408 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:08:57.408 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:08:57.408 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:57.408 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:57.408 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:57.408 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:08:57.408 14:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.408 14:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.408 malloc4 00:08:57.408 14:33:48 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.408 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:08:57.408 14:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.408 14:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.408 [2024-10-01 14:33:48.957643] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:08:57.408 [2024-10-01 14:33:48.957691] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:57.408 [2024-10-01 14:33:48.957716] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:57.408 [2024-10-01 14:33:48.957725] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:57.408 [2024-10-01 14:33:48.959785] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:57.408 [2024-10-01 14:33:48.959906] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:08:57.408 pt4 00:08:57.408 14:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.408 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:57.408 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:57.408 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:08:57.408 14:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.408 14:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.408 [2024-10-01 14:33:48.965721] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:57.408 [2024-10-01 
14:33:48.967518] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:57.408 [2024-10-01 14:33:48.967675] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:57.408 [2024-10-01 14:33:48.967754] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:08:57.408 [2024-10-01 14:33:48.967934] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:57.408 [2024-10-01 14:33:48.967949] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:08:57.408 [2024-10-01 14:33:48.968205] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:57.408 [2024-10-01 14:33:48.968341] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:57.408 [2024-10-01 14:33:48.968352] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:57.408 [2024-10-01 14:33:48.968487] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:57.408 14:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.408 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:08:57.408 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:57.408 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:57.408 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:57.408 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.408 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:57.408 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:57.408 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.408 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.408 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.408 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.408 14:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.408 14:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.408 14:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:57.408 14:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.408 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.408 "name": "raid_bdev1", 00:08:57.408 "uuid": "9c59d79e-5292-4173-b44d-d84b268a23ae", 00:08:57.408 "strip_size_kb": 64, 00:08:57.408 "state": "online", 00:08:57.408 "raid_level": "concat", 00:08:57.408 "superblock": true, 00:08:57.408 "num_base_bdevs": 4, 00:08:57.408 "num_base_bdevs_discovered": 4, 00:08:57.408 "num_base_bdevs_operational": 4, 00:08:57.408 "base_bdevs_list": [ 00:08:57.408 { 00:08:57.408 "name": "pt1", 00:08:57.408 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:57.408 "is_configured": true, 00:08:57.408 "data_offset": 2048, 00:08:57.408 "data_size": 63488 00:08:57.408 }, 00:08:57.408 { 00:08:57.408 "name": "pt2", 00:08:57.408 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:57.408 "is_configured": true, 00:08:57.408 "data_offset": 2048, 00:08:57.408 "data_size": 63488 00:08:57.408 }, 00:08:57.408 { 00:08:57.408 "name": "pt3", 00:08:57.408 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:57.408 "is_configured": true, 00:08:57.408 "data_offset": 2048, 00:08:57.408 
"data_size": 63488 00:08:57.408 }, 00:08:57.408 { 00:08:57.408 "name": "pt4", 00:08:57.408 "uuid": "00000000-0000-0000-0000-000000000004", 00:08:57.408 "is_configured": true, 00:08:57.408 "data_offset": 2048, 00:08:57.408 "data_size": 63488 00:08:57.408 } 00:08:57.408 ] 00:08:57.408 }' 00:08:57.408 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.408 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.667 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:57.667 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:57.667 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:57.667 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:57.667 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:57.667 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:57.667 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:57.667 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:57.667 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.667 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.667 [2024-10-01 14:33:49.314105] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:57.667 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.667 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:57.667 "name": "raid_bdev1", 00:08:57.667 "aliases": [ 00:08:57.667 "9c59d79e-5292-4173-b44d-d84b268a23ae" 
00:08:57.667 ], 00:08:57.667 "product_name": "Raid Volume", 00:08:57.667 "block_size": 512, 00:08:57.667 "num_blocks": 253952, 00:08:57.667 "uuid": "9c59d79e-5292-4173-b44d-d84b268a23ae", 00:08:57.667 "assigned_rate_limits": { 00:08:57.667 "rw_ios_per_sec": 0, 00:08:57.667 "rw_mbytes_per_sec": 0, 00:08:57.667 "r_mbytes_per_sec": 0, 00:08:57.667 "w_mbytes_per_sec": 0 00:08:57.667 }, 00:08:57.667 "claimed": false, 00:08:57.667 "zoned": false, 00:08:57.667 "supported_io_types": { 00:08:57.667 "read": true, 00:08:57.667 "write": true, 00:08:57.667 "unmap": true, 00:08:57.667 "flush": true, 00:08:57.667 "reset": true, 00:08:57.667 "nvme_admin": false, 00:08:57.667 "nvme_io": false, 00:08:57.667 "nvme_io_md": false, 00:08:57.667 "write_zeroes": true, 00:08:57.667 "zcopy": false, 00:08:57.667 "get_zone_info": false, 00:08:57.667 "zone_management": false, 00:08:57.667 "zone_append": false, 00:08:57.667 "compare": false, 00:08:57.667 "compare_and_write": false, 00:08:57.667 "abort": false, 00:08:57.667 "seek_hole": false, 00:08:57.667 "seek_data": false, 00:08:57.667 "copy": false, 00:08:57.667 "nvme_iov_md": false 00:08:57.667 }, 00:08:57.667 "memory_domains": [ 00:08:57.667 { 00:08:57.667 "dma_device_id": "system", 00:08:57.667 "dma_device_type": 1 00:08:57.667 }, 00:08:57.667 { 00:08:57.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.667 "dma_device_type": 2 00:08:57.667 }, 00:08:57.667 { 00:08:57.667 "dma_device_id": "system", 00:08:57.667 "dma_device_type": 1 00:08:57.667 }, 00:08:57.667 { 00:08:57.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.667 "dma_device_type": 2 00:08:57.667 }, 00:08:57.667 { 00:08:57.667 "dma_device_id": "system", 00:08:57.667 "dma_device_type": 1 00:08:57.667 }, 00:08:57.667 { 00:08:57.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.667 "dma_device_type": 2 00:08:57.667 }, 00:08:57.667 { 00:08:57.667 "dma_device_id": "system", 00:08:57.667 "dma_device_type": 1 00:08:57.667 }, 00:08:57.667 { 00:08:57.667 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:08:57.667 "dma_device_type": 2 00:08:57.667 } 00:08:57.667 ], 00:08:57.667 "driver_specific": { 00:08:57.667 "raid": { 00:08:57.667 "uuid": "9c59d79e-5292-4173-b44d-d84b268a23ae", 00:08:57.667 "strip_size_kb": 64, 00:08:57.667 "state": "online", 00:08:57.667 "raid_level": "concat", 00:08:57.667 "superblock": true, 00:08:57.667 "num_base_bdevs": 4, 00:08:57.667 "num_base_bdevs_discovered": 4, 00:08:57.667 "num_base_bdevs_operational": 4, 00:08:57.667 "base_bdevs_list": [ 00:08:57.667 { 00:08:57.667 "name": "pt1", 00:08:57.667 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:57.667 "is_configured": true, 00:08:57.667 "data_offset": 2048, 00:08:57.667 "data_size": 63488 00:08:57.667 }, 00:08:57.667 { 00:08:57.667 "name": "pt2", 00:08:57.667 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:57.667 "is_configured": true, 00:08:57.667 "data_offset": 2048, 00:08:57.667 "data_size": 63488 00:08:57.667 }, 00:08:57.667 { 00:08:57.667 "name": "pt3", 00:08:57.667 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:57.667 "is_configured": true, 00:08:57.667 "data_offset": 2048, 00:08:57.667 "data_size": 63488 00:08:57.667 }, 00:08:57.667 { 00:08:57.667 "name": "pt4", 00:08:57.667 "uuid": "00000000-0000-0000-0000-000000000004", 00:08:57.667 "is_configured": true, 00:08:57.667 "data_offset": 2048, 00:08:57.667 "data_size": 63488 00:08:57.667 } 00:08:57.667 ] 00:08:57.667 } 00:08:57.667 } 00:08:57.667 }' 00:08:57.667 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:57.925 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:57.925 pt2 00:08:57.925 pt3 00:08:57.925 pt4' 00:08:57.925 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.925 14:33:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:57.925 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.925 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:57.925 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.925 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.925 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.925 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.925 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.925 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.925 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.925 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:57.925 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.925 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.925 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.925 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.925 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.925 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.925 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.925 14:33:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.925 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:57.926 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.926 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.926 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.926 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.926 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.926 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.926 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:08:57.926 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.926 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.926 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.926 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.926 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.926 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.926 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:57.926 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:57.926 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:57.926 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.926 [2024-10-01 14:33:49.546110] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:57.926 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.926 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9c59d79e-5292-4173-b44d-d84b268a23ae 00:08:57.926 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9c59d79e-5292-4173-b44d-d84b268a23ae ']' 00:08:57.926 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:57.926 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.926 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.926 [2024-10-01 14:33:49.569805] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:57.926 [2024-10-01 14:33:49.569831] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:57.926 [2024-10-01 14:33:49.569897] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:57.926 [2024-10-01 14:33:49.569966] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:57.926 [2024-10-01 14:33:49.569981] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:57.926 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.926 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.926 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.926 14:33:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:57.926 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:57.926 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.184 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:58.184 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:58.184 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:58.184 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:58.184 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.184 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.184 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.184 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:58.184 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:58.184 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.184 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.184 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.184 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:58.184 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:58.184 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.184 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.184 14:33:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.184 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:58.184 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:08:58.184 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.184 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.184 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.184 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:58.184 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:58.184 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.184 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.184 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.184 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:58.184 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:08:58.184 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:58.184 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:08:58.184 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:58.184 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:58.184 14:33:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:58.184 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:58.184 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:08:58.184 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.184 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.184 [2024-10-01 14:33:49.677844] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:58.184 [2024-10-01 14:33:49.679669] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:58.184 [2024-10-01 14:33:49.679723] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:58.184 [2024-10-01 14:33:49.679758] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:08:58.184 [2024-10-01 14:33:49.679804] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:58.184 [2024-10-01 14:33:49.679849] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:58.184 [2024-10-01 14:33:49.679869] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:58.184 [2024-10-01 14:33:49.679888] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:08:58.184 [2024-10-01 14:33:49.679900] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:58.184 [2024-10-01 14:33:49.679911] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:08:58.184 request: 00:08:58.184 { 00:08:58.184 "name": "raid_bdev1", 00:08:58.184 "raid_level": "concat", 00:08:58.184 "base_bdevs": [ 00:08:58.184 "malloc1", 00:08:58.184 "malloc2", 00:08:58.184 "malloc3", 00:08:58.184 "malloc4" 00:08:58.184 ], 00:08:58.184 "strip_size_kb": 64, 00:08:58.184 "superblock": false, 00:08:58.184 "method": "bdev_raid_create", 00:08:58.184 "req_id": 1 00:08:58.184 } 00:08:58.184 Got JSON-RPC error response 00:08:58.184 response: 00:08:58.184 { 00:08:58.184 "code": -17, 00:08:58.184 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:58.184 } 00:08:58.184 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:58.184 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:58.184 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:58.184 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:58.184 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:58.184 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.184 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.185 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.185 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:58.185 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.185 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:58.185 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:58.185 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:08:58.185 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.185 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.185 [2024-10-01 14:33:49.717826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:58.185 [2024-10-01 14:33:49.717874] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:58.185 [2024-10-01 14:33:49.717890] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:58.185 [2024-10-01 14:33:49.717900] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:58.185 [2024-10-01 14:33:49.719993] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:58.185 [2024-10-01 14:33:49.720029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:58.185 [2024-10-01 14:33:49.720096] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:58.185 [2024-10-01 14:33:49.720147] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:58.185 pt1 00:08:58.185 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.185 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:08:58.185 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:58.185 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.185 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:58.185 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.185 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:08:58.185 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.185 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.185 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.185 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.185 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.185 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.185 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.185 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:58.185 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.185 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.185 "name": "raid_bdev1", 00:08:58.185 "uuid": "9c59d79e-5292-4173-b44d-d84b268a23ae", 00:08:58.185 "strip_size_kb": 64, 00:08:58.185 "state": "configuring", 00:08:58.185 "raid_level": "concat", 00:08:58.185 "superblock": true, 00:08:58.185 "num_base_bdevs": 4, 00:08:58.185 "num_base_bdevs_discovered": 1, 00:08:58.185 "num_base_bdevs_operational": 4, 00:08:58.185 "base_bdevs_list": [ 00:08:58.185 { 00:08:58.185 "name": "pt1", 00:08:58.185 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:58.185 "is_configured": true, 00:08:58.185 "data_offset": 2048, 00:08:58.185 "data_size": 63488 00:08:58.185 }, 00:08:58.185 { 00:08:58.185 "name": null, 00:08:58.185 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:58.185 "is_configured": false, 00:08:58.185 "data_offset": 2048, 00:08:58.185 "data_size": 63488 00:08:58.185 }, 00:08:58.185 { 00:08:58.185 "name": null, 00:08:58.185 
"uuid": "00000000-0000-0000-0000-000000000003", 00:08:58.185 "is_configured": false, 00:08:58.185 "data_offset": 2048, 00:08:58.185 "data_size": 63488 00:08:58.185 }, 00:08:58.185 { 00:08:58.185 "name": null, 00:08:58.185 "uuid": "00000000-0000-0000-0000-000000000004", 00:08:58.185 "is_configured": false, 00:08:58.185 "data_offset": 2048, 00:08:58.185 "data_size": 63488 00:08:58.185 } 00:08:58.185 ] 00:08:58.185 }' 00:08:58.185 14:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.185 14:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.443 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:08:58.443 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:58.443 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.443 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.443 [2024-10-01 14:33:50.025919] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:58.443 [2024-10-01 14:33:50.025981] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:58.443 [2024-10-01 14:33:50.026000] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:08:58.443 [2024-10-01 14:33:50.026010] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:58.443 [2024-10-01 14:33:50.026431] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:58.443 [2024-10-01 14:33:50.026449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:58.443 [2024-10-01 14:33:50.026520] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:58.443 [2024-10-01 14:33:50.026541] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:58.443 pt2 00:08:58.443 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.443 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:58.443 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.443 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.443 [2024-10-01 14:33:50.037933] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:58.443 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.443 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:08:58.443 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:58.443 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.443 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:58.443 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.443 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:58.443 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.443 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.443 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.443 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.443 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:58.443 14:33:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.443 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.443 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.443 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.443 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.443 "name": "raid_bdev1", 00:08:58.443 "uuid": "9c59d79e-5292-4173-b44d-d84b268a23ae", 00:08:58.443 "strip_size_kb": 64, 00:08:58.443 "state": "configuring", 00:08:58.443 "raid_level": "concat", 00:08:58.443 "superblock": true, 00:08:58.443 "num_base_bdevs": 4, 00:08:58.443 "num_base_bdevs_discovered": 1, 00:08:58.443 "num_base_bdevs_operational": 4, 00:08:58.443 "base_bdevs_list": [ 00:08:58.443 { 00:08:58.443 "name": "pt1", 00:08:58.443 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:58.443 "is_configured": true, 00:08:58.443 "data_offset": 2048, 00:08:58.443 "data_size": 63488 00:08:58.443 }, 00:08:58.443 { 00:08:58.443 "name": null, 00:08:58.443 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:58.443 "is_configured": false, 00:08:58.443 "data_offset": 0, 00:08:58.443 "data_size": 63488 00:08:58.443 }, 00:08:58.443 { 00:08:58.443 "name": null, 00:08:58.443 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:58.443 "is_configured": false, 00:08:58.443 "data_offset": 2048, 00:08:58.443 "data_size": 63488 00:08:58.443 }, 00:08:58.443 { 00:08:58.443 "name": null, 00:08:58.443 "uuid": "00000000-0000-0000-0000-000000000004", 00:08:58.443 "is_configured": false, 00:08:58.443 "data_offset": 2048, 00:08:58.443 "data_size": 63488 00:08:58.443 } 00:08:58.443 ] 00:08:58.443 }' 00:08:58.443 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.443 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:58.701 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:58.701 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:58.701 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:58.701 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.701 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.701 [2024-10-01 14:33:50.354005] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:58.701 [2024-10-01 14:33:50.354065] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:58.701 [2024-10-01 14:33:50.354082] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:08:58.701 [2024-10-01 14:33:50.354092] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:58.701 [2024-10-01 14:33:50.354492] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:58.701 [2024-10-01 14:33:50.354506] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:58.701 [2024-10-01 14:33:50.354577] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:58.701 [2024-10-01 14:33:50.354599] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:58.701 pt2 00:08:58.701 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.701 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:58.701 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:58.701 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p 
pt3 -u 00000000-0000-0000-0000-000000000003 00:08:58.701 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.701 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.701 [2024-10-01 14:33:50.361990] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:58.701 [2024-10-01 14:33:50.362036] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:58.701 [2024-10-01 14:33:50.362056] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:08:58.701 [2024-10-01 14:33:50.362064] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:58.701 [2024-10-01 14:33:50.362416] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:58.701 [2024-10-01 14:33:50.362434] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:58.701 [2024-10-01 14:33:50.362492] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:58.701 [2024-10-01 14:33:50.362509] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:58.701 pt3 00:08:58.701 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.701 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:58.701 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:58.701 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:08:58.701 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.701 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.701 [2024-10-01 14:33:50.369963] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc4 00:08:58.701 [2024-10-01 14:33:50.370003] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:58.701 [2024-10-01 14:33:50.370018] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:08:58.701 [2024-10-01 14:33:50.370026] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:58.701 [2024-10-01 14:33:50.370378] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:58.701 [2024-10-01 14:33:50.370395] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:08:58.701 [2024-10-01 14:33:50.370449] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:08:58.702 [2024-10-01 14:33:50.370468] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:08:58.702 [2024-10-01 14:33:50.370593] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:58.702 [2024-10-01 14:33:50.370602] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:08:58.702 [2024-10-01 14:33:50.370839] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:58.702 [2024-10-01 14:33:50.370968] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:58.702 [2024-10-01 14:33:50.370978] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:58.702 [2024-10-01 14:33:50.371094] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:58.702 pt4 00:08:58.702 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.702 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:58.702 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:58.702 
14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:08:58.702 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:58.702 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:58.702 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:58.702 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.702 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:58.702 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.702 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.702 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.702 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.702 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.702 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:58.702 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.702 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.010 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.010 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.010 "name": "raid_bdev1", 00:08:59.010 "uuid": "9c59d79e-5292-4173-b44d-d84b268a23ae", 00:08:59.010 "strip_size_kb": 64, 00:08:59.010 "state": "online", 00:08:59.010 "raid_level": "concat", 00:08:59.010 "superblock": true, 00:08:59.010 
"num_base_bdevs": 4, 00:08:59.010 "num_base_bdevs_discovered": 4, 00:08:59.010 "num_base_bdevs_operational": 4, 00:08:59.010 "base_bdevs_list": [ 00:08:59.010 { 00:08:59.010 "name": "pt1", 00:08:59.010 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:59.010 "is_configured": true, 00:08:59.010 "data_offset": 2048, 00:08:59.010 "data_size": 63488 00:08:59.010 }, 00:08:59.010 { 00:08:59.010 "name": "pt2", 00:08:59.010 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:59.010 "is_configured": true, 00:08:59.010 "data_offset": 2048, 00:08:59.010 "data_size": 63488 00:08:59.010 }, 00:08:59.010 { 00:08:59.010 "name": "pt3", 00:08:59.010 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:59.010 "is_configured": true, 00:08:59.010 "data_offset": 2048, 00:08:59.010 "data_size": 63488 00:08:59.010 }, 00:08:59.010 { 00:08:59.010 "name": "pt4", 00:08:59.010 "uuid": "00000000-0000-0000-0000-000000000004", 00:08:59.010 "is_configured": true, 00:08:59.010 "data_offset": 2048, 00:08:59.010 "data_size": 63488 00:08:59.010 } 00:08:59.010 ] 00:08:59.010 }' 00:08:59.010 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.010 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.297 [2024-10-01 14:33:50.706421] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:59.297 "name": "raid_bdev1", 00:08:59.297 "aliases": [ 00:08:59.297 "9c59d79e-5292-4173-b44d-d84b268a23ae" 00:08:59.297 ], 00:08:59.297 "product_name": "Raid Volume", 00:08:59.297 "block_size": 512, 00:08:59.297 "num_blocks": 253952, 00:08:59.297 "uuid": "9c59d79e-5292-4173-b44d-d84b268a23ae", 00:08:59.297 "assigned_rate_limits": { 00:08:59.297 "rw_ios_per_sec": 0, 00:08:59.297 "rw_mbytes_per_sec": 0, 00:08:59.297 "r_mbytes_per_sec": 0, 00:08:59.297 "w_mbytes_per_sec": 0 00:08:59.297 }, 00:08:59.297 "claimed": false, 00:08:59.297 "zoned": false, 00:08:59.297 "supported_io_types": { 00:08:59.297 "read": true, 00:08:59.297 "write": true, 00:08:59.297 "unmap": true, 00:08:59.297 "flush": true, 00:08:59.297 "reset": true, 00:08:59.297 "nvme_admin": false, 00:08:59.297 "nvme_io": false, 00:08:59.297 "nvme_io_md": false, 00:08:59.297 "write_zeroes": true, 00:08:59.297 "zcopy": false, 00:08:59.297 "get_zone_info": false, 00:08:59.297 "zone_management": false, 00:08:59.297 "zone_append": false, 00:08:59.297 "compare": false, 00:08:59.297 "compare_and_write": false, 00:08:59.297 "abort": false, 00:08:59.297 "seek_hole": false, 00:08:59.297 "seek_data": false, 00:08:59.297 "copy": false, 00:08:59.297 "nvme_iov_md": false 00:08:59.297 }, 00:08:59.297 "memory_domains": [ 00:08:59.297 { 00:08:59.297 "dma_device_id": "system", 
00:08:59.297 "dma_device_type": 1 00:08:59.297 }, 00:08:59.297 { 00:08:59.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.297 "dma_device_type": 2 00:08:59.297 }, 00:08:59.297 { 00:08:59.297 "dma_device_id": "system", 00:08:59.297 "dma_device_type": 1 00:08:59.297 }, 00:08:59.297 { 00:08:59.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.297 "dma_device_type": 2 00:08:59.297 }, 00:08:59.297 { 00:08:59.297 "dma_device_id": "system", 00:08:59.297 "dma_device_type": 1 00:08:59.297 }, 00:08:59.297 { 00:08:59.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.297 "dma_device_type": 2 00:08:59.297 }, 00:08:59.297 { 00:08:59.297 "dma_device_id": "system", 00:08:59.297 "dma_device_type": 1 00:08:59.297 }, 00:08:59.297 { 00:08:59.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.297 "dma_device_type": 2 00:08:59.297 } 00:08:59.297 ], 00:08:59.297 "driver_specific": { 00:08:59.297 "raid": { 00:08:59.297 "uuid": "9c59d79e-5292-4173-b44d-d84b268a23ae", 00:08:59.297 "strip_size_kb": 64, 00:08:59.297 "state": "online", 00:08:59.297 "raid_level": "concat", 00:08:59.297 "superblock": true, 00:08:59.297 "num_base_bdevs": 4, 00:08:59.297 "num_base_bdevs_discovered": 4, 00:08:59.297 "num_base_bdevs_operational": 4, 00:08:59.297 "base_bdevs_list": [ 00:08:59.297 { 00:08:59.297 "name": "pt1", 00:08:59.297 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:59.297 "is_configured": true, 00:08:59.297 "data_offset": 2048, 00:08:59.297 "data_size": 63488 00:08:59.297 }, 00:08:59.297 { 00:08:59.297 "name": "pt2", 00:08:59.297 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:59.297 "is_configured": true, 00:08:59.297 "data_offset": 2048, 00:08:59.297 "data_size": 63488 00:08:59.297 }, 00:08:59.297 { 00:08:59.297 "name": "pt3", 00:08:59.297 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:59.297 "is_configured": true, 00:08:59.297 "data_offset": 2048, 00:08:59.297 "data_size": 63488 00:08:59.297 }, 00:08:59.297 { 00:08:59.297 "name": "pt4", 00:08:59.297 
"uuid": "00000000-0000-0000-0000-000000000004", 00:08:59.297 "is_configured": true, 00:08:59.297 "data_offset": 2048, 00:08:59.297 "data_size": 63488 00:08:59.297 } 00:08:59.297 ] 00:08:59.297 } 00:08:59.297 } 00:08:59.297 }' 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:59.297 pt2 00:08:59.297 pt3 00:08:59.297 pt4' 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.297 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:59.298 [2024-10-01 14:33:50.950441] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:59.298 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.298 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9c59d79e-5292-4173-b44d-d84b268a23ae '!=' 9c59d79e-5292-4173-b44d-d84b268a23ae ']' 00:08:59.298 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:59.298 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:59.298 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:59.298 14:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70948 00:08:59.298 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 70948 ']' 00:08:59.298 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 70948 00:08:59.298 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:59.555 14:33:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:59.555 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70948 00:08:59.555 killing process with pid 70948 00:08:59.555 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:59.555 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:59.555 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70948' 00:08:59.555 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 70948 00:08:59.555 [2024-10-01 14:33:50.997046] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:59.555 14:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 70948 00:08:59.555 [2024-10-01 14:33:50.997121] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:59.555 [2024-10-01 14:33:50.997192] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:59.555 [2024-10-01 14:33:50.997202] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:59.813 [2024-10-01 14:33:51.239242] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:00.378 14:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:00.378 00:09:00.378 real 0m4.167s 00:09:00.378 user 0m5.934s 00:09:00.378 sys 0m0.670s 00:09:00.378 14:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:00.378 ************************************ 00:09:00.378 END TEST raid_superblock_test 00:09:00.378 ************************************ 00:09:00.378 14:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.636 
14:33:52 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:09:00.636 14:33:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:00.636 14:33:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:00.636 14:33:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:00.636 ************************************ 00:09:00.636 START TEST raid_read_error_test 00:09:00.636 ************************************ 00:09:00.636 14:33:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:09:00.636 14:33:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:00.636 14:33:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:00.636 14:33:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:00.636 14:33:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:00.636 14:33:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:00.636 14:33:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:00.636 14:33:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:00.636 14:33:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:00.636 14:33:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:00.636 14:33:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:00.636 14:33:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:00.636 14:33:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:00.636 14:33:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:00.636 14:33:52 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:00.636 14:33:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:00.636 14:33:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:00.636 14:33:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:00.636 14:33:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:00.636 14:33:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:00.636 14:33:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:00.636 14:33:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:00.636 14:33:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:00.636 14:33:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:00.636 14:33:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:00.636 14:33:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:00.636 14:33:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:00.636 14:33:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:00.636 14:33:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:00.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:00.636 14:33:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.x50HE0Q2BH 00:09:00.636 14:33:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71196 00:09:00.636 14:33:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71196 00:09:00.636 14:33:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 71196 ']' 00:09:00.636 14:33:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.636 14:33:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:00.636 14:33:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.636 14:33:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:00.636 14:33:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.637 14:33:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:00.637 [2024-10-01 14:33:52.150282] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:09:00.637 [2024-10-01 14:33:52.150508] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71196 ] 00:09:00.637 [2024-10-01 14:33:52.301251] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.894 [2024-10-01 14:33:52.488662] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.157 [2024-10-01 14:33:52.625233] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:01.157 [2024-10-01 14:33:52.625432] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:01.415 14:33:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:01.415 14:33:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:01.415 14:33:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:01.415 14:33:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:01.415 14:33:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.415 14:33:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.415 BaseBdev1_malloc 00:09:01.415 14:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.415 14:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:01.415 14:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.415 14:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.415 true 00:09:01.415 14:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:01.415 14:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:01.415 14:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.415 14:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.415 [2024-10-01 14:33:53.030439] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:01.415 [2024-10-01 14:33:53.030599] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:01.415 [2024-10-01 14:33:53.030676] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:01.415 [2024-10-01 14:33:53.030747] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:01.415 [2024-10-01 14:33:53.032987] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:01.415 [2024-10-01 14:33:53.033101] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:01.416 BaseBdev1 00:09:01.416 14:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.416 14:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:01.416 14:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:01.416 14:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.416 14:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.416 BaseBdev2_malloc 00:09:01.416 14:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.416 14:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:01.416 14:33:53 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.416 14:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.416 true 00:09:01.416 14:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.416 14:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:01.416 14:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.416 14:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.416 [2024-10-01 14:33:53.086494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:01.416 [2024-10-01 14:33:53.086551] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:01.416 [2024-10-01 14:33:53.086567] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:01.416 [2024-10-01 14:33:53.086578] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:01.416 [2024-10-01 14:33:53.088699] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:01.416 [2024-10-01 14:33:53.088742] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:01.416 BaseBdev2 00:09:01.416 14:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.416 14:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:01.416 14:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:01.416 14:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.416 14:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.674 BaseBdev3_malloc 00:09:01.674 14:33:53 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.674 14:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:01.674 14:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.674 14:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.674 true 00:09:01.674 14:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.674 14:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:01.674 14:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.674 14:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.674 [2024-10-01 14:33:53.130788] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:01.674 [2024-10-01 14:33:53.130834] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:01.674 [2024-10-01 14:33:53.130850] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:01.674 [2024-10-01 14:33:53.130860] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:01.674 [2024-10-01 14:33:53.132951] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:01.674 [2024-10-01 14:33:53.132986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:01.674 BaseBdev3 00:09:01.674 14:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.674 14:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:01.674 14:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:09:01.674 14:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.674 14:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.674 BaseBdev4_malloc 00:09:01.674 14:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.674 14:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:09:01.674 14:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.674 14:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.674 true 00:09:01.674 14:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.674 14:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:09:01.674 14:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.674 14:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.674 [2024-10-01 14:33:53.174874] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:09:01.675 [2024-10-01 14:33:53.174926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:01.675 [2024-10-01 14:33:53.174944] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:01.675 [2024-10-01 14:33:53.174954] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:01.675 [2024-10-01 14:33:53.177052] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:01.675 [2024-10-01 14:33:53.177092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:09:01.675 BaseBdev4 00:09:01.675 14:33:53 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.675 14:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:09:01.675 14:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.675 14:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.675 [2024-10-01 14:33:53.182957] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:01.675 [2024-10-01 14:33:53.184812] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:01.675 [2024-10-01 14:33:53.184886] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:01.675 [2024-10-01 14:33:53.184949] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:01.675 [2024-10-01 14:33:53.185177] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:09:01.675 [2024-10-01 14:33:53.185192] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:01.675 [2024-10-01 14:33:53.185446] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:01.675 [2024-10-01 14:33:53.185587] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:09:01.675 [2024-10-01 14:33:53.185595] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:09:01.675 [2024-10-01 14:33:53.185757] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:01.675 14:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.675 14:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:09:01.675 14:33:53 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:01.675 14:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:01.675 14:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:01.675 14:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.675 14:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:01.675 14:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.675 14:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.675 14:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.675 14:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.675 14:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.675 14:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:01.675 14:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.675 14:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.675 14:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.675 14:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.675 "name": "raid_bdev1", 00:09:01.675 "uuid": "ab72b06d-4bb5-48f7-82d8-60c0d21228bc", 00:09:01.675 "strip_size_kb": 64, 00:09:01.675 "state": "online", 00:09:01.675 "raid_level": "concat", 00:09:01.675 "superblock": true, 00:09:01.675 "num_base_bdevs": 4, 00:09:01.675 "num_base_bdevs_discovered": 4, 00:09:01.675 "num_base_bdevs_operational": 4, 00:09:01.675 "base_bdevs_list": [ 
00:09:01.675 { 00:09:01.675 "name": "BaseBdev1", 00:09:01.675 "uuid": "14e80e16-928b-5be8-88a1-b01fd8db2f97", 00:09:01.675 "is_configured": true, 00:09:01.675 "data_offset": 2048, 00:09:01.675 "data_size": 63488 00:09:01.675 }, 00:09:01.675 { 00:09:01.675 "name": "BaseBdev2", 00:09:01.675 "uuid": "96abcbce-89af-5d21-bacb-c059b92c0e00", 00:09:01.675 "is_configured": true, 00:09:01.675 "data_offset": 2048, 00:09:01.675 "data_size": 63488 00:09:01.675 }, 00:09:01.675 { 00:09:01.675 "name": "BaseBdev3", 00:09:01.675 "uuid": "78039b16-5877-567b-965f-4f8205cf854f", 00:09:01.675 "is_configured": true, 00:09:01.675 "data_offset": 2048, 00:09:01.675 "data_size": 63488 00:09:01.675 }, 00:09:01.675 { 00:09:01.675 "name": "BaseBdev4", 00:09:01.675 "uuid": "430c4923-24e5-5e7e-ab1a-2b07b327839c", 00:09:01.675 "is_configured": true, 00:09:01.675 "data_offset": 2048, 00:09:01.675 "data_size": 63488 00:09:01.675 } 00:09:01.675 ] 00:09:01.675 }' 00:09:01.675 14:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.675 14:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.933 14:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:01.933 14:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:01.933 [2024-10-01 14:33:53.599985] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:02.883 14:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:02.883 14:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.883 14:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.883 14:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.883 14:33:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:02.883 14:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:02.883 14:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:09:02.883 14:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:09:02.883 14:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:02.883 14:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:02.883 14:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:02.883 14:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.883 14:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:02.883 14:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.883 14:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.883 14:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.883 14:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.883 14:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.883 14:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:02.883 14:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.883 14:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.883 14:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.883 14:33:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.883 "name": "raid_bdev1", 00:09:02.883 "uuid": "ab72b06d-4bb5-48f7-82d8-60c0d21228bc", 00:09:02.883 "strip_size_kb": 64, 00:09:02.883 "state": "online", 00:09:02.883 "raid_level": "concat", 00:09:02.883 "superblock": true, 00:09:02.883 "num_base_bdevs": 4, 00:09:02.883 "num_base_bdevs_discovered": 4, 00:09:02.883 "num_base_bdevs_operational": 4, 00:09:02.883 "base_bdevs_list": [ 00:09:02.883 { 00:09:02.883 "name": "BaseBdev1", 00:09:02.883 "uuid": "14e80e16-928b-5be8-88a1-b01fd8db2f97", 00:09:02.883 "is_configured": true, 00:09:02.883 "data_offset": 2048, 00:09:02.883 "data_size": 63488 00:09:02.883 }, 00:09:02.883 { 00:09:02.883 "name": "BaseBdev2", 00:09:02.883 "uuid": "96abcbce-89af-5d21-bacb-c059b92c0e00", 00:09:02.883 "is_configured": true, 00:09:02.883 "data_offset": 2048, 00:09:02.883 "data_size": 63488 00:09:02.883 }, 00:09:02.883 { 00:09:02.883 "name": "BaseBdev3", 00:09:02.883 "uuid": "78039b16-5877-567b-965f-4f8205cf854f", 00:09:02.883 "is_configured": true, 00:09:02.883 "data_offset": 2048, 00:09:02.883 "data_size": 63488 00:09:02.883 }, 00:09:02.883 { 00:09:02.883 "name": "BaseBdev4", 00:09:02.883 "uuid": "430c4923-24e5-5e7e-ab1a-2b07b327839c", 00:09:02.883 "is_configured": true, 00:09:02.883 "data_offset": 2048, 00:09:02.883 "data_size": 63488 00:09:02.883 } 00:09:02.883 ] 00:09:02.883 }' 00:09:02.883 14:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.883 14:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.447 14:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:03.447 14:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.448 14:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.448 [2024-10-01 14:33:54.845736] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:03.448 [2024-10-01 14:33:54.845894] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:03.448 [2024-10-01 14:33:54.848977] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:03.448 [2024-10-01 14:33:54.849128] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:03.448 [2024-10-01 14:33:54.849235] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:03.448 [2024-10-01 14:33:54.849310] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:09:03.448 { 00:09:03.448 "results": [ 00:09:03.448 { 00:09:03.448 "job": "raid_bdev1", 00:09:03.448 "core_mask": "0x1", 00:09:03.448 "workload": "randrw", 00:09:03.448 "percentage": 50, 00:09:03.448 "status": "finished", 00:09:03.448 "queue_depth": 1, 00:09:03.448 "io_size": 131072, 00:09:03.448 "runtime": 1.244057, 00:09:03.448 "iops": 14675.372591448784, 00:09:03.448 "mibps": 1834.421573931098, 00:09:03.448 "io_failed": 1, 00:09:03.448 "io_timeout": 0, 00:09:03.448 "avg_latency_us": 93.29351567700566, 00:09:03.448 "min_latency_us": 33.28, 00:09:03.448 "max_latency_us": 1688.8123076923077 00:09:03.448 } 00:09:03.448 ], 00:09:03.448 "core_count": 1 00:09:03.448 } 00:09:03.448 14:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.448 14:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71196 00:09:03.448 14:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 71196 ']' 00:09:03.448 14:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 71196 00:09:03.448 14:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:03.448 14:33:54 bdev_raid.raid_read_error_test --
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:03.448 14:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71196 00:09:03.448 killing process with pid 71196 00:09:03.448 14:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:03.448 14:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:03.448 14:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71196' 00:09:03.448 14:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 71196 00:09:03.448 14:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 71196 00:09:03.448 [2024-10-01 14:33:54.872028] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:03.448 [2024-10-01 14:33:55.081306] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:04.381 14:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.x50HE0Q2BH 00:09:04.381 14:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:04.381 14:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:04.381 ************************************ 00:09:04.381 END TEST raid_read_error_test 00:09:04.381 ************************************ 00:09:04.381 14:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.80 00:09:04.381 14:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:04.381 14:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:04.381 14:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:04.381 14:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.80 != \0\.\0\0 ]] 00:09:04.381 00:09:04.381 real 0m3.891s 
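The `fail_per_s=0.80` value extracted above is the failures-per-second column of bdevperf's log: `io_failed` divided by `runtime` from the JSON results block earlier in the trace, and the `mibps` figure is `iops * io_size` scaled to MiB. A minimal standalone sketch of that arithmetic, using the first run's numbers (`io_failed=1`, `runtime=1.244057`, `iops=14675.37…`, `io_size=131072`); the variable names are illustrative, not the script's own:

```shell
# Reproduce the fail_per_s and MiB/s figures from the first run's
# bdevperf JSON results. awk is used for the floating-point math,
# mirroring the log's own awk-based column extraction.
io_failed=1
runtime=1.244057
iops=14675.372591448784
io_size=131072   # bytes per I/O (128 KiB, bdevperf -o 128k)

fail_per_s=$(awk -v f="$io_failed" -v r="$runtime" 'BEGIN { printf "%.2f", f / r }')
mibps=$(awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.2f", i * s / (1024 * 1024) }')
echo "fail_per_s=$fail_per_s mibps=$mibps"   # fail_per_s=0.80 mibps=1834.42
```

The second run's values (`io_failed=1`, `runtime=1.272667`) give 0.79 the same way, matching the write-error test below.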
00:09:04.381 user 0m4.575s 00:09:04.381 sys 0m0.393s 00:09:04.381 14:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:04.381 14:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.381 14:33:55 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:09:04.381 14:33:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:04.381 14:33:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:04.381 14:33:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:04.381 ************************************ 00:09:04.381 START TEST raid_write_error_test 00:09:04.381 ************************************ 00:09:04.382 14:33:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write 00:09:04.382 14:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:04.382 14:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:04.382 14:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:04.382 14:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:04.382 14:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:04.382 14:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:04.382 14:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:04.382 14:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:04.382 14:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:04.382 14:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:04.382 14:33:56 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:04.382 14:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:04.382 14:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:04.382 14:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:04.382 14:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:04.382 14:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:04.382 14:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:04.382 14:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:04.382 14:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:04.382 14:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:04.382 14:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:04.382 14:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:04.382 14:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:04.382 14:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:04.382 14:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:04.382 14:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:04.382 14:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:04.382 14:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:04.382 14:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.4yBovJm2rn 00:09:04.382 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.382 14:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71335 00:09:04.382 14:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71335 00:09:04.382 14:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:04.382 14:33:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 71335 ']' 00:09:04.382 14:33:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.382 14:33:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:04.382 14:33:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.382 14:33:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:04.382 14:33:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.639 [2024-10-01 14:33:56.075474] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
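The `(( i = 1 )) … (( i <= num_base_bdevs ))` trace lines above come from the loop in bdev_raid.sh that builds the `base_bdevs` array (`BaseBdev1` through `BaseBdev4`). A simplified, self-contained sketch of the same construction, not the script's exact text:

```shell
# Simplified sketch of the traced loop: generate the base bdev names
# BaseBdev1..BaseBdev4 for a 4-disk array and collect them in an array,
# as the xtrace output above shows for num_base_bdevs=4.
num_base_bdevs=4
base_bdevs=()
for (( i = 1; i <= num_base_bdevs; i++ )); do
    base_bdevs+=("BaseBdev$i")
done
echo "${base_bdevs[*]}"   # BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4
</imports> is empty because the loop uses only bash builtins
```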
00:09:04.639 [2024-10-01 14:33:56.075953] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71335 ] 00:09:04.639 [2024-10-01 14:33:56.225220] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.897 [2024-10-01 14:33:56.420946] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.897 [2024-10-01 14:33:56.558549] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:04.897 [2024-10-01 14:33:56.558592] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:05.462 14:33:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:05.462 14:33:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:05.462 14:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:05.462 14:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:05.462 14:33:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.462 14:33:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.462 BaseBdev1_malloc 00:09:05.462 14:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.462 14:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:05.462 14:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.462 14:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.462 true 00:09:05.462 14:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:05.462 14:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:05.462 14:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.462 14:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.462 [2024-10-01 14:33:57.028741] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:05.462 [2024-10-01 14:33:57.028796] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:05.462 [2024-10-01 14:33:57.028814] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:05.462 [2024-10-01 14:33:57.028825] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:05.462 [2024-10-01 14:33:57.031064] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:05.462 [2024-10-01 14:33:57.031104] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:05.462 BaseBdev1 00:09:05.462 14:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.462 14:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:05.462 14:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:05.462 14:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.462 14:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.462 BaseBdev2_malloc 00:09:05.462 14:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.462 14:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:05.462 14:33:57 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.462 14:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.462 true 00:09:05.462 14:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.462 14:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:05.462 14:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.462 14:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.462 [2024-10-01 14:33:57.086420] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:05.462 [2024-10-01 14:33:57.086480] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:05.462 [2024-10-01 14:33:57.086501] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:05.462 [2024-10-01 14:33:57.086512] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:05.462 [2024-10-01 14:33:57.088688] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:05.462 [2024-10-01 14:33:57.088865] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:05.462 BaseBdev2 00:09:05.462 14:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.462 14:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:05.462 14:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:05.462 14:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.463 14:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:09:05.463 BaseBdev3_malloc 00:09:05.463 14:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.463 14:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:05.463 14:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.463 14:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.463 true 00:09:05.463 14:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.463 14:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:05.463 14:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.463 14:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.463 [2024-10-01 14:33:57.130689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:05.463 [2024-10-01 14:33:57.130759] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:05.463 [2024-10-01 14:33:57.130778] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:05.463 [2024-10-01 14:33:57.130795] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:05.463 [2024-10-01 14:33:57.133257] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:05.463 [2024-10-01 14:33:57.133304] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:05.463 BaseBdev3 00:09:05.463 14:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.463 14:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:05.463 14:33:57 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:09:05.463 14:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.463 14:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.721 BaseBdev4_malloc 00:09:05.721 14:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.721 14:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:09:05.721 14:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.721 14:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.721 true 00:09:05.721 14:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.721 14:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:09:05.721 14:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.721 14:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.721 [2024-10-01 14:33:57.175133] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:09:05.721 [2024-10-01 14:33:57.175185] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:05.721 [2024-10-01 14:33:57.175204] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:05.721 [2024-10-01 14:33:57.175216] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:05.721 [2024-10-01 14:33:57.177428] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:05.721 [2024-10-01 14:33:57.177474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:09:05.721 BaseBdev4 
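Each base device in the trace above is a three-layer stack built with the same RPC triplet: a malloc bdev, an error bdev wrapped around it (so `bdev_error_inject_error` can later force write failures), and a passthru bdev exposing the final `BaseBdevN` name. The following dry-run sketch only prints that command sequence; actually executing it requires a live SPDK target and `scripts/rpc.py` (which `rpc_cmd` wraps), so nothing here talks to a socket:

```shell
# Dry-run sketch: emit the per-device RPC triplet seen in the trace
# (malloc -> error -> passthru). These are real SPDK RPC names; we only
# echo them since no SPDK target is running in this sketch.
for name in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
    echo "rpc_cmd bdev_malloc_create 32 512 -b ${name}_malloc"
    echo "rpc_cmd bdev_error_create ${name}_malloc"
    echo "rpc_cmd bdev_passthru_create -b EE_${name}_malloc -p ${name}"
done
```

The `EE_` prefix on the passthru's base is the name the error bdev layer gives its injectable device, which is why the error-injection call later targets `EE_BaseBdev1_malloc`.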
00:09:05.721 14:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.721 14:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:09:05.721 14:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.721 14:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.721 [2024-10-01 14:33:57.183220] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:05.721 [2024-10-01 14:33:57.185152] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:05.721 [2024-10-01 14:33:57.185229] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:05.721 [2024-10-01 14:33:57.185293] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:05.721 [2024-10-01 14:33:57.185578] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:09:05.721 [2024-10-01 14:33:57.185593] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:05.721 [2024-10-01 14:33:57.185876] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:05.721 [2024-10-01 14:33:57.186027] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:09:05.721 [2024-10-01 14:33:57.186041] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:09:05.721 [2024-10-01 14:33:57.186204] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:05.721 14:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.721 14:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:09:05.721 14:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:05.721 14:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:05.721 14:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:05.721 14:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.721 14:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:05.721 14:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.721 14:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.721 14:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.721 14:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.721 14:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.721 14:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.721 14:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.721 14:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:05.721 14:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.721 14:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.721 "name": "raid_bdev1", 00:09:05.721 "uuid": "450ff802-0c0b-4ccc-8fe5-d53a5e14f7ec", 00:09:05.721 "strip_size_kb": 64, 00:09:05.721 "state": "online", 00:09:05.721 "raid_level": "concat", 00:09:05.721 "superblock": true, 00:09:05.721 "num_base_bdevs": 4, 00:09:05.721 "num_base_bdevs_discovered": 4, 00:09:05.721 
"num_base_bdevs_operational": 4, 00:09:05.721 "base_bdevs_list": [ 00:09:05.721 { 00:09:05.721 "name": "BaseBdev1", 00:09:05.721 "uuid": "bc095692-e5dc-5b9e-9c96-721977e54940", 00:09:05.721 "is_configured": true, 00:09:05.721 "data_offset": 2048, 00:09:05.721 "data_size": 63488 00:09:05.721 }, 00:09:05.721 { 00:09:05.721 "name": "BaseBdev2", 00:09:05.721 "uuid": "3b8c3d5f-1363-5ffc-ac21-4240e7154e19", 00:09:05.721 "is_configured": true, 00:09:05.721 "data_offset": 2048, 00:09:05.721 "data_size": 63488 00:09:05.721 }, 00:09:05.721 { 00:09:05.721 "name": "BaseBdev3", 00:09:05.721 "uuid": "d3f6e546-7de7-52f8-bd85-c2b83c7737dd", 00:09:05.721 "is_configured": true, 00:09:05.721 "data_offset": 2048, 00:09:05.721 "data_size": 63488 00:09:05.721 }, 00:09:05.721 { 00:09:05.721 "name": "BaseBdev4", 00:09:05.721 "uuid": "64665f2b-5ad0-53b0-9c75-96db2fabf803", 00:09:05.721 "is_configured": true, 00:09:05.721 "data_offset": 2048, 00:09:05.721 "data_size": 63488 00:09:05.721 } 00:09:05.721 ] 00:09:05.721 }' 00:09:05.721 14:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.721 14:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.978 14:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:05.978 14:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:06.235 [2024-10-01 14:33:57.684279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:07.167 14:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:07.167 14:33:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.167 14:33:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.167 14:33:58 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.167 14:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:07.167 14:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:07.167 14:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:09:07.167 14:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:09:07.167 14:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:07.167 14:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:07.167 14:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:07.167 14:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.167 14:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:07.167 14:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.167 14:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.167 14:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.167 14:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.167 14:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.167 14:33:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.167 14:33:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.167 14:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:07.167 14:33:58 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.167 14:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.167 "name": "raid_bdev1", 00:09:07.167 "uuid": "450ff802-0c0b-4ccc-8fe5-d53a5e14f7ec", 00:09:07.167 "strip_size_kb": 64, 00:09:07.167 "state": "online", 00:09:07.167 "raid_level": "concat", 00:09:07.167 "superblock": true, 00:09:07.167 "num_base_bdevs": 4, 00:09:07.167 "num_base_bdevs_discovered": 4, 00:09:07.167 "num_base_bdevs_operational": 4, 00:09:07.167 "base_bdevs_list": [ 00:09:07.167 { 00:09:07.167 "name": "BaseBdev1", 00:09:07.167 "uuid": "bc095692-e5dc-5b9e-9c96-721977e54940", 00:09:07.167 "is_configured": true, 00:09:07.167 "data_offset": 2048, 00:09:07.167 "data_size": 63488 00:09:07.167 }, 00:09:07.167 { 00:09:07.167 "name": "BaseBdev2", 00:09:07.167 "uuid": "3b8c3d5f-1363-5ffc-ac21-4240e7154e19", 00:09:07.167 "is_configured": true, 00:09:07.167 "data_offset": 2048, 00:09:07.167 "data_size": 63488 00:09:07.167 }, 00:09:07.167 { 00:09:07.167 "name": "BaseBdev3", 00:09:07.167 "uuid": "d3f6e546-7de7-52f8-bd85-c2b83c7737dd", 00:09:07.167 "is_configured": true, 00:09:07.167 "data_offset": 2048, 00:09:07.167 "data_size": 63488 00:09:07.167 }, 00:09:07.167 { 00:09:07.167 "name": "BaseBdev4", 00:09:07.167 "uuid": "64665f2b-5ad0-53b0-9c75-96db2fabf803", 00:09:07.167 "is_configured": true, 00:09:07.167 "data_offset": 2048, 00:09:07.167 "data_size": 63488 00:09:07.167 } 00:09:07.167 ] 00:09:07.167 }' 00:09:07.167 14:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.167 14:33:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.425 14:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:07.425 14:33:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.425 14:33:58 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:07.425 [2024-10-01 14:33:58.958888] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:07.425 [2024-10-01 14:33:58.959043] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:07.425 [2024-10-01 14:33:58.962186] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:07.425 [2024-10-01 14:33:58.962245] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:07.425 [2024-10-01 14:33:58.962291] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:07.425 [2024-10-01 14:33:58.962302] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:09:07.425 { 00:09:07.425 "results": [ 00:09:07.425 { 00:09:07.425 "job": "raid_bdev1", 00:09:07.425 "core_mask": "0x1", 00:09:07.425 "workload": "randrw", 00:09:07.425 "percentage": 50, 00:09:07.425 "status": "finished", 00:09:07.425 "queue_depth": 1, 00:09:07.425 "io_size": 131072, 00:09:07.425 "runtime": 1.272667, 00:09:07.425 "iops": 14228.388101522236, 00:09:07.425 "mibps": 1778.5485126902795, 00:09:07.425 "io_failed": 1, 00:09:07.425 "io_timeout": 0, 00:09:07.425 "avg_latency_us": 96.37953554756028, 00:09:07.425 "min_latency_us": 33.28, 00:09:07.425 "max_latency_us": 1739.2246153846154 00:09:07.425 } 00:09:07.425 ], 00:09:07.425 "core_count": 1 00:09:07.425 } 00:09:07.425 14:33:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.425 14:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71335 00:09:07.425 14:33:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 71335 ']' 00:09:07.425 14:33:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 71335 00:09:07.425 14:33:58 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@955 -- # uname 00:09:07.425 14:33:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:07.425 14:33:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71335 00:09:07.425 killing process with pid 71335 00:09:07.425 14:33:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:07.425 14:33:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:07.425 14:33:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71335' 00:09:07.425 14:33:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 71335 00:09:07.425 [2024-10-01 14:33:58.988649] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:07.425 14:33:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 71335 00:09:07.683 [2024-10-01 14:33:59.195458] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:08.614 14:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.4yBovJm2rn 00:09:08.614 14:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:08.614 14:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:08.614 ************************************ 00:09:08.614 END TEST raid_write_error_test 00:09:08.614 ************************************ 00:09:08.614 14:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.79 00:09:08.614 14:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:08.614 14:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:08.614 14:34:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:08.614 14:34:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.79 != \0\.\0\0 ]] 00:09:08.614 00:09:08.614 real 0m4.063s 00:09:08.614 user 0m4.954s 00:09:08.614 sys 0m0.406s 00:09:08.614 14:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:08.614 14:34:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.614 14:34:00 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:08.614 14:34:00 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:09:08.614 14:34:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:08.614 14:34:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:08.614 14:34:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:08.614 ************************************ 00:09:08.614 START TEST raid_state_function_test 00:09:08.614 ************************************ 00:09:08.614 14:34:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false 00:09:08.614 14:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:08.614 14:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:08.614 14:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:08.614 14:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:08.614 14:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:08.614 14:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:08.614 14:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:08.614 14:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:09:08.614 14:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:08.614 14:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:08.614 14:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:08.614 14:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:08.614 14:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:08.614 14:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:08.614 14:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:08.614 14:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:08.614 14:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:08.614 14:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:08.614 14:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:08.614 14:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:08.614 14:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:08.614 14:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:08.614 14:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:08.614 14:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:08.614 14:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:08.614 14:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:08.614 14:34:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:08.614 14:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:08.614 Process raid pid: 71473 00:09:08.614 14:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71473 00:09:08.614 14:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71473' 00:09:08.614 14:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71473 00:09:08.614 14:34:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 71473 ']' 00:09:08.614 14:34:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.614 14:34:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:08.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.614 14:34:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.614 14:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:08.614 14:34:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:08.614 14:34:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.614 [2024-10-01 14:34:00.174000] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:09:08.614 [2024-10-01 14:34:00.174128] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:08.871 [2024-10-01 14:34:00.322674] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.871 [2024-10-01 14:34:00.513251] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.128 [2024-10-01 14:34:00.654286] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:09.128 [2024-10-01 14:34:00.654503] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:09.699 14:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:09.699 14:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:09.699 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:09.699 14:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.699 14:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.699 [2024-10-01 14:34:01.126745] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:09.699 [2024-10-01 14:34:01.126943] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:09.699 [2024-10-01 14:34:01.126962] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:09.699 [2024-10-01 14:34:01.126973] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:09.699 [2024-10-01 14:34:01.126979] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:09.699 [2024-10-01 14:34:01.126988] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:09.699 [2024-10-01 14:34:01.126994] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:09.699 [2024-10-01 14:34:01.127005] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:09.699 14:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.699 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:09.699 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.699 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.699 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:09.699 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:09.699 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:09.699 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.699 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.699 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.699 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.699 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.699 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.699 14:34:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.699 14:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.699 14:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.699 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.699 "name": "Existed_Raid", 00:09:09.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.699 "strip_size_kb": 0, 00:09:09.699 "state": "configuring", 00:09:09.699 "raid_level": "raid1", 00:09:09.699 "superblock": false, 00:09:09.699 "num_base_bdevs": 4, 00:09:09.699 "num_base_bdevs_discovered": 0, 00:09:09.699 "num_base_bdevs_operational": 4, 00:09:09.699 "base_bdevs_list": [ 00:09:09.699 { 00:09:09.699 "name": "BaseBdev1", 00:09:09.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.699 "is_configured": false, 00:09:09.699 "data_offset": 0, 00:09:09.699 "data_size": 0 00:09:09.699 }, 00:09:09.699 { 00:09:09.699 "name": "BaseBdev2", 00:09:09.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.699 "is_configured": false, 00:09:09.699 "data_offset": 0, 00:09:09.699 "data_size": 0 00:09:09.699 }, 00:09:09.699 { 00:09:09.699 "name": "BaseBdev3", 00:09:09.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.699 "is_configured": false, 00:09:09.699 "data_offset": 0, 00:09:09.699 "data_size": 0 00:09:09.699 }, 00:09:09.699 { 00:09:09.699 "name": "BaseBdev4", 00:09:09.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.699 "is_configured": false, 00:09:09.699 "data_offset": 0, 00:09:09.699 "data_size": 0 00:09:09.699 } 00:09:09.699 ] 00:09:09.699 }' 00:09:09.699 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.699 14:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.958 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:09:09.958 14:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.958 14:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.958 [2024-10-01 14:34:01.510779] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:09.958 [2024-10-01 14:34:01.510818] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:09.958 14:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.958 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:09.958 14:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.958 14:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.958 [2024-10-01 14:34:01.518787] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:09.958 [2024-10-01 14:34:01.518828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:09.958 [2024-10-01 14:34:01.518837] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:09.958 [2024-10-01 14:34:01.518846] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:09.958 [2024-10-01 14:34:01.518853] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:09.958 [2024-10-01 14:34:01.518862] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:09.958 [2024-10-01 14:34:01.518868] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:09.958 [2024-10-01 14:34:01.518875] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:09.958 14:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.958 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:09.958 14:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.958 14:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.958 [2024-10-01 14:34:01.566340] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:09.958 BaseBdev1 00:09:09.958 14:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.958 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:09.958 14:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:09.958 14:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:09.958 14:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:09.958 14:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:09.958 14:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:09.958 14:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:09.958 14:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.958 14:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.958 14:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.958 14:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:09.958 14:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.958 14:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.958 [ 00:09:09.958 { 00:09:09.958 "name": "BaseBdev1", 00:09:09.958 "aliases": [ 00:09:09.958 "208343c4-7704-47a5-820b-bec6e8820107" 00:09:09.958 ], 00:09:09.958 "product_name": "Malloc disk", 00:09:09.958 "block_size": 512, 00:09:09.958 "num_blocks": 65536, 00:09:09.958 "uuid": "208343c4-7704-47a5-820b-bec6e8820107", 00:09:09.958 "assigned_rate_limits": { 00:09:09.958 "rw_ios_per_sec": 0, 00:09:09.958 "rw_mbytes_per_sec": 0, 00:09:09.958 "r_mbytes_per_sec": 0, 00:09:09.958 "w_mbytes_per_sec": 0 00:09:09.958 }, 00:09:09.958 "claimed": true, 00:09:09.958 "claim_type": "exclusive_write", 00:09:09.958 "zoned": false, 00:09:09.958 "supported_io_types": { 00:09:09.958 "read": true, 00:09:09.958 "write": true, 00:09:09.958 "unmap": true, 00:09:09.958 "flush": true, 00:09:09.958 "reset": true, 00:09:09.958 "nvme_admin": false, 00:09:09.958 "nvme_io": false, 00:09:09.958 "nvme_io_md": false, 00:09:09.958 "write_zeroes": true, 00:09:09.958 "zcopy": true, 00:09:09.958 "get_zone_info": false, 00:09:09.958 "zone_management": false, 00:09:09.958 "zone_append": false, 00:09:09.958 "compare": false, 00:09:09.958 "compare_and_write": false, 00:09:09.958 "abort": true, 00:09:09.958 "seek_hole": false, 00:09:09.958 "seek_data": false, 00:09:09.958 "copy": true, 00:09:09.958 "nvme_iov_md": false 00:09:09.958 }, 00:09:09.958 "memory_domains": [ 00:09:09.958 { 00:09:09.958 "dma_device_id": "system", 00:09:09.958 "dma_device_type": 1 00:09:09.958 }, 00:09:09.958 { 00:09:09.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.958 "dma_device_type": 2 00:09:09.958 } 00:09:09.958 ], 00:09:09.958 "driver_specific": {} 00:09:09.958 } 00:09:09.958 ] 00:09:09.958 14:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:09.958 14:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:09.958 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:09.958 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.958 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.958 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:09.958 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:09.958 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:09.958 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.959 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.959 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.959 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.959 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.959 14:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.959 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.959 14:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.959 14:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.959 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.959 "name": "Existed_Raid", 
00:09:09.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.959 "strip_size_kb": 0, 00:09:09.959 "state": "configuring", 00:09:09.959 "raid_level": "raid1", 00:09:09.959 "superblock": false, 00:09:09.959 "num_base_bdevs": 4, 00:09:09.959 "num_base_bdevs_discovered": 1, 00:09:09.959 "num_base_bdevs_operational": 4, 00:09:09.959 "base_bdevs_list": [ 00:09:09.959 { 00:09:09.959 "name": "BaseBdev1", 00:09:09.959 "uuid": "208343c4-7704-47a5-820b-bec6e8820107", 00:09:09.959 "is_configured": true, 00:09:09.959 "data_offset": 0, 00:09:09.959 "data_size": 65536 00:09:09.959 }, 00:09:09.959 { 00:09:09.959 "name": "BaseBdev2", 00:09:09.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.959 "is_configured": false, 00:09:09.959 "data_offset": 0, 00:09:09.959 "data_size": 0 00:09:09.959 }, 00:09:09.959 { 00:09:09.959 "name": "BaseBdev3", 00:09:09.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.959 "is_configured": false, 00:09:09.959 "data_offset": 0, 00:09:09.959 "data_size": 0 00:09:09.959 }, 00:09:09.959 { 00:09:09.959 "name": "BaseBdev4", 00:09:09.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.959 "is_configured": false, 00:09:09.959 "data_offset": 0, 00:09:09.959 "data_size": 0 00:09:09.959 } 00:09:09.959 ] 00:09:09.959 }' 00:09:09.959 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.959 14:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.524 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:10.524 14:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.524 14:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.524 [2024-10-01 14:34:01.922466] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:10.524 [2024-10-01 14:34:01.922632] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:10.524 14:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.524 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:10.524 14:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.524 14:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.524 [2024-10-01 14:34:01.930520] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:10.524 [2024-10-01 14:34:01.932657] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:10.524 [2024-10-01 14:34:01.932814] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:10.524 [2024-10-01 14:34:01.932880] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:10.524 [2024-10-01 14:34:01.932999] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:10.524 [2024-10-01 14:34:01.933055] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:10.524 [2024-10-01 14:34:01.933069] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:10.524 14:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.524 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:10.524 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:10.524 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:10.524 
14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.524 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.524 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:10.524 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:10.524 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:10.524 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.524 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.524 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.524 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.524 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.524 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.524 14:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.524 14:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.524 14:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.524 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.524 "name": "Existed_Raid", 00:09:10.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.524 "strip_size_kb": 0, 00:09:10.524 "state": "configuring", 00:09:10.524 "raid_level": "raid1", 00:09:10.524 "superblock": false, 00:09:10.524 "num_base_bdevs": 4, 00:09:10.524 "num_base_bdevs_discovered": 1, 
00:09:10.524 "num_base_bdevs_operational": 4, 00:09:10.524 "base_bdevs_list": [ 00:09:10.524 { 00:09:10.524 "name": "BaseBdev1", 00:09:10.524 "uuid": "208343c4-7704-47a5-820b-bec6e8820107", 00:09:10.524 "is_configured": true, 00:09:10.524 "data_offset": 0, 00:09:10.524 "data_size": 65536 00:09:10.524 }, 00:09:10.524 { 00:09:10.524 "name": "BaseBdev2", 00:09:10.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.524 "is_configured": false, 00:09:10.524 "data_offset": 0, 00:09:10.524 "data_size": 0 00:09:10.524 }, 00:09:10.524 { 00:09:10.524 "name": "BaseBdev3", 00:09:10.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.524 "is_configured": false, 00:09:10.524 "data_offset": 0, 00:09:10.524 "data_size": 0 00:09:10.524 }, 00:09:10.524 { 00:09:10.524 "name": "BaseBdev4", 00:09:10.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.524 "is_configured": false, 00:09:10.524 "data_offset": 0, 00:09:10.524 "data_size": 0 00:09:10.524 } 00:09:10.524 ] 00:09:10.524 }' 00:09:10.524 14:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.524 14:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.781 14:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:10.781 14:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.781 14:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.781 [2024-10-01 14:34:02.294506] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:10.781 BaseBdev2 00:09:10.781 14:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.781 14:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:10.781 14:34:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:10.781 14:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:10.781 14:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:10.781 14:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:10.781 14:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:10.781 14:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:10.781 14:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.781 14:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.781 14:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.781 14:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:10.781 14:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.781 14:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.781 [ 00:09:10.781 { 00:09:10.781 "name": "BaseBdev2", 00:09:10.781 "aliases": [ 00:09:10.781 "633532bc-4228-4288-86b5-5041fb955bbb" 00:09:10.781 ], 00:09:10.781 "product_name": "Malloc disk", 00:09:10.781 "block_size": 512, 00:09:10.781 "num_blocks": 65536, 00:09:10.781 "uuid": "633532bc-4228-4288-86b5-5041fb955bbb", 00:09:10.781 "assigned_rate_limits": { 00:09:10.781 "rw_ios_per_sec": 0, 00:09:10.781 "rw_mbytes_per_sec": 0, 00:09:10.781 "r_mbytes_per_sec": 0, 00:09:10.781 "w_mbytes_per_sec": 0 00:09:10.781 }, 00:09:10.781 "claimed": true, 00:09:10.781 "claim_type": "exclusive_write", 00:09:10.781 "zoned": false, 00:09:10.781 "supported_io_types": { 00:09:10.781 "read": true, 
00:09:10.781 "write": true, 00:09:10.781 "unmap": true, 00:09:10.781 "flush": true, 00:09:10.781 "reset": true, 00:09:10.781 "nvme_admin": false, 00:09:10.781 "nvme_io": false, 00:09:10.781 "nvme_io_md": false, 00:09:10.781 "write_zeroes": true, 00:09:10.781 "zcopy": true, 00:09:10.781 "get_zone_info": false, 00:09:10.781 "zone_management": false, 00:09:10.781 "zone_append": false, 00:09:10.781 "compare": false, 00:09:10.781 "compare_and_write": false, 00:09:10.781 "abort": true, 00:09:10.781 "seek_hole": false, 00:09:10.781 "seek_data": false, 00:09:10.781 "copy": true, 00:09:10.781 "nvme_iov_md": false 00:09:10.781 }, 00:09:10.781 "memory_domains": [ 00:09:10.781 { 00:09:10.781 "dma_device_id": "system", 00:09:10.781 "dma_device_type": 1 00:09:10.781 }, 00:09:10.781 { 00:09:10.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.781 "dma_device_type": 2 00:09:10.781 } 00:09:10.781 ], 00:09:10.781 "driver_specific": {} 00:09:10.781 } 00:09:10.781 ] 00:09:10.781 14:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.781 14:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:10.781 14:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:10.781 14:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:10.781 14:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:10.781 14:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.781 14:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.781 14:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:10.781 14:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:09:10.781 14:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:10.781 14:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.781 14:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.781 14:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.781 14:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.781 14:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.781 14:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.781 14:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.781 14:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.781 14:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.781 14:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.781 "name": "Existed_Raid", 00:09:10.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.781 "strip_size_kb": 0, 00:09:10.781 "state": "configuring", 00:09:10.781 "raid_level": "raid1", 00:09:10.781 "superblock": false, 00:09:10.781 "num_base_bdevs": 4, 00:09:10.781 "num_base_bdevs_discovered": 2, 00:09:10.781 "num_base_bdevs_operational": 4, 00:09:10.781 "base_bdevs_list": [ 00:09:10.781 { 00:09:10.781 "name": "BaseBdev1", 00:09:10.781 "uuid": "208343c4-7704-47a5-820b-bec6e8820107", 00:09:10.781 "is_configured": true, 00:09:10.781 "data_offset": 0, 00:09:10.781 "data_size": 65536 00:09:10.781 }, 00:09:10.781 { 00:09:10.782 "name": "BaseBdev2", 00:09:10.782 "uuid": "633532bc-4228-4288-86b5-5041fb955bbb", 00:09:10.782 "is_configured": true, 
00:09:10.782 "data_offset": 0, 00:09:10.782 "data_size": 65536 00:09:10.782 }, 00:09:10.782 { 00:09:10.782 "name": "BaseBdev3", 00:09:10.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.782 "is_configured": false, 00:09:10.782 "data_offset": 0, 00:09:10.782 "data_size": 0 00:09:10.782 }, 00:09:10.782 { 00:09:10.782 "name": "BaseBdev4", 00:09:10.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.782 "is_configured": false, 00:09:10.782 "data_offset": 0, 00:09:10.782 "data_size": 0 00:09:10.782 } 00:09:10.782 ] 00:09:10.782 }' 00:09:10.782 14:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.782 14:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.039 14:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:11.039 14:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.039 14:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.039 [2024-10-01 14:34:02.694066] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:11.039 BaseBdev3 00:09:11.039 14:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.039 14:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:11.039 14:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:11.039 14:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:11.039 14:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:11.039 14:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:11.039 14:34:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:11.039 14:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:11.039 14:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.039 14:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.039 14:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.039 14:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:11.039 14:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.039 14:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.039 [ 00:09:11.039 { 00:09:11.039 "name": "BaseBdev3", 00:09:11.039 "aliases": [ 00:09:11.039 "c7ac7170-bd1a-4077-ab86-e00d5f76b5f2" 00:09:11.039 ], 00:09:11.039 "product_name": "Malloc disk", 00:09:11.039 "block_size": 512, 00:09:11.039 "num_blocks": 65536, 00:09:11.039 "uuid": "c7ac7170-bd1a-4077-ab86-e00d5f76b5f2", 00:09:11.039 "assigned_rate_limits": { 00:09:11.039 "rw_ios_per_sec": 0, 00:09:11.039 "rw_mbytes_per_sec": 0, 00:09:11.039 "r_mbytes_per_sec": 0, 00:09:11.039 "w_mbytes_per_sec": 0 00:09:11.039 }, 00:09:11.039 "claimed": true, 00:09:11.039 "claim_type": "exclusive_write", 00:09:11.039 "zoned": false, 00:09:11.039 "supported_io_types": { 00:09:11.039 "read": true, 00:09:11.039 "write": true, 00:09:11.039 "unmap": true, 00:09:11.039 "flush": true, 00:09:11.039 "reset": true, 00:09:11.039 "nvme_admin": false, 00:09:11.039 "nvme_io": false, 00:09:11.039 "nvme_io_md": false, 00:09:11.039 "write_zeroes": true, 00:09:11.039 "zcopy": true, 00:09:11.039 "get_zone_info": false, 00:09:11.039 "zone_management": false, 00:09:11.039 "zone_append": false, 00:09:11.039 "compare": false, 00:09:11.039 "compare_and_write": false, 
00:09:11.039 "abort": true, 00:09:11.039 "seek_hole": false, 00:09:11.039 "seek_data": false, 00:09:11.039 "copy": true, 00:09:11.039 "nvme_iov_md": false 00:09:11.039 }, 00:09:11.039 "memory_domains": [ 00:09:11.039 { 00:09:11.039 "dma_device_id": "system", 00:09:11.039 "dma_device_type": 1 00:09:11.039 }, 00:09:11.039 { 00:09:11.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.039 "dma_device_type": 2 00:09:11.039 } 00:09:11.039 ], 00:09:11.039 "driver_specific": {} 00:09:11.039 } 00:09:11.039 ] 00:09:11.039 14:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.040 14:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:11.040 14:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:11.040 14:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:11.040 14:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:11.040 14:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.040 14:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.040 14:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:11.040 14:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:11.040 14:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:11.040 14:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.040 14:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.040 14:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:11.040 14:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.297 14:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.297 14:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.297 14:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.297 14:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.297 14:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.297 14:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.297 "name": "Existed_Raid", 00:09:11.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.297 "strip_size_kb": 0, 00:09:11.297 "state": "configuring", 00:09:11.297 "raid_level": "raid1", 00:09:11.297 "superblock": false, 00:09:11.297 "num_base_bdevs": 4, 00:09:11.297 "num_base_bdevs_discovered": 3, 00:09:11.297 "num_base_bdevs_operational": 4, 00:09:11.297 "base_bdevs_list": [ 00:09:11.297 { 00:09:11.297 "name": "BaseBdev1", 00:09:11.297 "uuid": "208343c4-7704-47a5-820b-bec6e8820107", 00:09:11.297 "is_configured": true, 00:09:11.297 "data_offset": 0, 00:09:11.297 "data_size": 65536 00:09:11.297 }, 00:09:11.297 { 00:09:11.297 "name": "BaseBdev2", 00:09:11.297 "uuid": "633532bc-4228-4288-86b5-5041fb955bbb", 00:09:11.297 "is_configured": true, 00:09:11.297 "data_offset": 0, 00:09:11.297 "data_size": 65536 00:09:11.297 }, 00:09:11.297 { 00:09:11.297 "name": "BaseBdev3", 00:09:11.297 "uuid": "c7ac7170-bd1a-4077-ab86-e00d5f76b5f2", 00:09:11.297 "is_configured": true, 00:09:11.297 "data_offset": 0, 00:09:11.297 "data_size": 65536 00:09:11.297 }, 00:09:11.297 { 00:09:11.297 "name": "BaseBdev4", 00:09:11.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.297 "is_configured": false, 
00:09:11.297 "data_offset": 0, 00:09:11.297 "data_size": 0 00:09:11.297 } 00:09:11.297 ] 00:09:11.297 }' 00:09:11.297 14:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.297 14:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.555 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:11.555 14:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.555 14:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.555 [2024-10-01 14:34:03.113485] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:11.555 [2024-10-01 14:34:03.113736] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:11.555 [2024-10-01 14:34:03.113773] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:11.555 [2024-10-01 14:34:03.114101] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:11.555 [2024-10-01 14:34:03.114334] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:11.555 [2024-10-01 14:34:03.114416] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:11.555 BaseBdev4 00:09:11.555 [2024-10-01 14:34:03.114729] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:11.555 14:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.555 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:11.555 14:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:11.555 14:34:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:11.555 14:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:11.555 14:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:11.555 14:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:11.555 14:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:11.555 14:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.555 14:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.555 14:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.555 14:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:11.555 14:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.555 14:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.555 [ 00:09:11.555 { 00:09:11.555 "name": "BaseBdev4", 00:09:11.555 "aliases": [ 00:09:11.555 "c90a5c2c-820c-4594-85af-9279f1771ee0" 00:09:11.555 ], 00:09:11.555 "product_name": "Malloc disk", 00:09:11.555 "block_size": 512, 00:09:11.555 "num_blocks": 65536, 00:09:11.555 "uuid": "c90a5c2c-820c-4594-85af-9279f1771ee0", 00:09:11.555 "assigned_rate_limits": { 00:09:11.555 "rw_ios_per_sec": 0, 00:09:11.555 "rw_mbytes_per_sec": 0, 00:09:11.555 "r_mbytes_per_sec": 0, 00:09:11.555 "w_mbytes_per_sec": 0 00:09:11.555 }, 00:09:11.555 "claimed": true, 00:09:11.555 "claim_type": "exclusive_write", 00:09:11.555 "zoned": false, 00:09:11.555 "supported_io_types": { 00:09:11.555 "read": true, 00:09:11.555 "write": true, 00:09:11.555 "unmap": true, 00:09:11.555 "flush": true, 00:09:11.555 "reset": true, 00:09:11.555 
"nvme_admin": false, 00:09:11.555 "nvme_io": false, 00:09:11.555 "nvme_io_md": false, 00:09:11.555 "write_zeroes": true, 00:09:11.555 "zcopy": true, 00:09:11.555 "get_zone_info": false, 00:09:11.555 "zone_management": false, 00:09:11.555 "zone_append": false, 00:09:11.555 "compare": false, 00:09:11.555 "compare_and_write": false, 00:09:11.555 "abort": true, 00:09:11.555 "seek_hole": false, 00:09:11.555 "seek_data": false, 00:09:11.555 "copy": true, 00:09:11.555 "nvme_iov_md": false 00:09:11.555 }, 00:09:11.555 "memory_domains": [ 00:09:11.555 { 00:09:11.555 "dma_device_id": "system", 00:09:11.555 "dma_device_type": 1 00:09:11.555 }, 00:09:11.555 { 00:09:11.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.555 "dma_device_type": 2 00:09:11.555 } 00:09:11.555 ], 00:09:11.555 "driver_specific": {} 00:09:11.555 } 00:09:11.555 ] 00:09:11.555 14:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.555 14:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:11.555 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:11.555 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:11.555 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:09:11.555 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.555 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:11.555 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:11.555 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:11.555 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:11.555 14:34:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.555 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.555 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.555 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.555 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.555 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.555 14:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.555 14:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.555 14:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.555 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.555 "name": "Existed_Raid", 00:09:11.555 "uuid": "fdb4e234-e90b-48d3-a038-f5ee86983000", 00:09:11.555 "strip_size_kb": 0, 00:09:11.555 "state": "online", 00:09:11.555 "raid_level": "raid1", 00:09:11.555 "superblock": false, 00:09:11.555 "num_base_bdevs": 4, 00:09:11.555 "num_base_bdevs_discovered": 4, 00:09:11.555 "num_base_bdevs_operational": 4, 00:09:11.555 "base_bdevs_list": [ 00:09:11.555 { 00:09:11.555 "name": "BaseBdev1", 00:09:11.555 "uuid": "208343c4-7704-47a5-820b-bec6e8820107", 00:09:11.555 "is_configured": true, 00:09:11.555 "data_offset": 0, 00:09:11.555 "data_size": 65536 00:09:11.555 }, 00:09:11.555 { 00:09:11.555 "name": "BaseBdev2", 00:09:11.555 "uuid": "633532bc-4228-4288-86b5-5041fb955bbb", 00:09:11.555 "is_configured": true, 00:09:11.555 "data_offset": 0, 00:09:11.555 "data_size": 65536 00:09:11.555 }, 00:09:11.555 { 00:09:11.555 "name": "BaseBdev3", 00:09:11.555 "uuid": 
"c7ac7170-bd1a-4077-ab86-e00d5f76b5f2", 00:09:11.555 "is_configured": true, 00:09:11.555 "data_offset": 0, 00:09:11.555 "data_size": 65536 00:09:11.555 }, 00:09:11.555 { 00:09:11.555 "name": "BaseBdev4", 00:09:11.555 "uuid": "c90a5c2c-820c-4594-85af-9279f1771ee0", 00:09:11.555 "is_configured": true, 00:09:11.555 "data_offset": 0, 00:09:11.555 "data_size": 65536 00:09:11.555 } 00:09:11.555 ] 00:09:11.555 }' 00:09:11.555 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.555 14:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.813 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:11.813 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:11.813 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:11.813 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:11.813 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:11.813 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:11.813 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:11.813 14:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.813 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:11.813 14:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.071 [2024-10-01 14:34:03.494010] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:12.071 14:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.071 14:34:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:12.071 "name": "Existed_Raid", 00:09:12.071 "aliases": [ 00:09:12.071 "fdb4e234-e90b-48d3-a038-f5ee86983000" 00:09:12.071 ], 00:09:12.071 "product_name": "Raid Volume", 00:09:12.071 "block_size": 512, 00:09:12.071 "num_blocks": 65536, 00:09:12.071 "uuid": "fdb4e234-e90b-48d3-a038-f5ee86983000", 00:09:12.071 "assigned_rate_limits": { 00:09:12.071 "rw_ios_per_sec": 0, 00:09:12.071 "rw_mbytes_per_sec": 0, 00:09:12.071 "r_mbytes_per_sec": 0, 00:09:12.071 "w_mbytes_per_sec": 0 00:09:12.071 }, 00:09:12.071 "claimed": false, 00:09:12.071 "zoned": false, 00:09:12.071 "supported_io_types": { 00:09:12.071 "read": true, 00:09:12.071 "write": true, 00:09:12.071 "unmap": false, 00:09:12.071 "flush": false, 00:09:12.071 "reset": true, 00:09:12.071 "nvme_admin": false, 00:09:12.071 "nvme_io": false, 00:09:12.071 "nvme_io_md": false, 00:09:12.071 "write_zeroes": true, 00:09:12.071 "zcopy": false, 00:09:12.071 "get_zone_info": false, 00:09:12.071 "zone_management": false, 00:09:12.071 "zone_append": false, 00:09:12.071 "compare": false, 00:09:12.071 "compare_and_write": false, 00:09:12.071 "abort": false, 00:09:12.071 "seek_hole": false, 00:09:12.071 "seek_data": false, 00:09:12.071 "copy": false, 00:09:12.071 "nvme_iov_md": false 00:09:12.071 }, 00:09:12.071 "memory_domains": [ 00:09:12.071 { 00:09:12.071 "dma_device_id": "system", 00:09:12.071 "dma_device_type": 1 00:09:12.071 }, 00:09:12.071 { 00:09:12.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.071 "dma_device_type": 2 00:09:12.071 }, 00:09:12.071 { 00:09:12.071 "dma_device_id": "system", 00:09:12.071 "dma_device_type": 1 00:09:12.071 }, 00:09:12.071 { 00:09:12.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.071 "dma_device_type": 2 00:09:12.071 }, 00:09:12.071 { 00:09:12.071 "dma_device_id": "system", 00:09:12.071 "dma_device_type": 1 00:09:12.071 }, 00:09:12.071 { 00:09:12.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:12.071 "dma_device_type": 2 00:09:12.071 }, 00:09:12.071 { 00:09:12.071 "dma_device_id": "system", 00:09:12.071 "dma_device_type": 1 00:09:12.071 }, 00:09:12.071 { 00:09:12.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.071 "dma_device_type": 2 00:09:12.071 } 00:09:12.071 ], 00:09:12.071 "driver_specific": { 00:09:12.071 "raid": { 00:09:12.071 "uuid": "fdb4e234-e90b-48d3-a038-f5ee86983000", 00:09:12.071 "strip_size_kb": 0, 00:09:12.071 "state": "online", 00:09:12.071 "raid_level": "raid1", 00:09:12.071 "superblock": false, 00:09:12.071 "num_base_bdevs": 4, 00:09:12.071 "num_base_bdevs_discovered": 4, 00:09:12.071 "num_base_bdevs_operational": 4, 00:09:12.071 "base_bdevs_list": [ 00:09:12.071 { 00:09:12.071 "name": "BaseBdev1", 00:09:12.071 "uuid": "208343c4-7704-47a5-820b-bec6e8820107", 00:09:12.071 "is_configured": true, 00:09:12.071 "data_offset": 0, 00:09:12.071 "data_size": 65536 00:09:12.071 }, 00:09:12.071 { 00:09:12.071 "name": "BaseBdev2", 00:09:12.071 "uuid": "633532bc-4228-4288-86b5-5041fb955bbb", 00:09:12.071 "is_configured": true, 00:09:12.071 "data_offset": 0, 00:09:12.071 "data_size": 65536 00:09:12.071 }, 00:09:12.071 { 00:09:12.071 "name": "BaseBdev3", 00:09:12.071 "uuid": "c7ac7170-bd1a-4077-ab86-e00d5f76b5f2", 00:09:12.071 "is_configured": true, 00:09:12.071 "data_offset": 0, 00:09:12.071 "data_size": 65536 00:09:12.071 }, 00:09:12.071 { 00:09:12.071 "name": "BaseBdev4", 00:09:12.071 "uuid": "c90a5c2c-820c-4594-85af-9279f1771ee0", 00:09:12.071 "is_configured": true, 00:09:12.071 "data_offset": 0, 00:09:12.071 "data_size": 65536 00:09:12.071 } 00:09:12.071 ] 00:09:12.071 } 00:09:12.071 } 00:09:12.071 }' 00:09:12.071 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:12.071 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:12.071 BaseBdev2 00:09:12.071 BaseBdev3 
00:09:12.071 BaseBdev4' 00:09:12.071 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.071 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:12.071 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.071 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:12.071 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.071 14:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.071 14:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.071 14:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.071 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.071 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.071 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.071 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:12.071 14:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.071 14:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.071 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.071 14:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.071 14:34:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.071 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.071 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.071 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:12.071 14:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.071 14:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.072 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.072 14:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.072 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.072 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.072 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.072 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:12.072 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.072 14:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.072 14:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.072 14:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.072 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.072 14:34:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.072 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:12.072 14:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.072 14:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.072 [2024-10-01 14:34:03.717764] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:12.329 14:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.329 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:12.329 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:12.329 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:12.329 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:12.329 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:12.329 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:12.329 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.329 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:12.329 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:12.329 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:12.329 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.329 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.329 
14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.329 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.329 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.329 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.329 14:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.329 14:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.329 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.329 14:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.329 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.329 "name": "Existed_Raid", 00:09:12.329 "uuid": "fdb4e234-e90b-48d3-a038-f5ee86983000", 00:09:12.329 "strip_size_kb": 0, 00:09:12.329 "state": "online", 00:09:12.329 "raid_level": "raid1", 00:09:12.329 "superblock": false, 00:09:12.329 "num_base_bdevs": 4, 00:09:12.329 "num_base_bdevs_discovered": 3, 00:09:12.329 "num_base_bdevs_operational": 3, 00:09:12.329 "base_bdevs_list": [ 00:09:12.329 { 00:09:12.329 "name": null, 00:09:12.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.329 "is_configured": false, 00:09:12.329 "data_offset": 0, 00:09:12.329 "data_size": 65536 00:09:12.329 }, 00:09:12.329 { 00:09:12.329 "name": "BaseBdev2", 00:09:12.329 "uuid": "633532bc-4228-4288-86b5-5041fb955bbb", 00:09:12.329 "is_configured": true, 00:09:12.329 "data_offset": 0, 00:09:12.329 "data_size": 65536 00:09:12.329 }, 00:09:12.329 { 00:09:12.329 "name": "BaseBdev3", 00:09:12.329 "uuid": "c7ac7170-bd1a-4077-ab86-e00d5f76b5f2", 00:09:12.329 "is_configured": true, 00:09:12.329 "data_offset": 0, 
00:09:12.329 "data_size": 65536 00:09:12.329 }, 00:09:12.329 { 00:09:12.329 "name": "BaseBdev4", 00:09:12.329 "uuid": "c90a5c2c-820c-4594-85af-9279f1771ee0", 00:09:12.329 "is_configured": true, 00:09:12.329 "data_offset": 0, 00:09:12.329 "data_size": 65536 00:09:12.329 } 00:09:12.329 ] 00:09:12.329 }' 00:09:12.329 14:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.329 14:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.587 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:12.587 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:12.587 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.587 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.587 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:12.587 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.587 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.587 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:12.587 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:12.587 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:12.587 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.587 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.587 [2024-10-01 14:34:04.102128] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:12.587 14:34:04 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.587 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:12.587 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:12.587 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.587 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:12.587 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.587 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.587 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.587 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:12.587 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:12.587 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:12.587 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.587 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.587 [2024-10-01 14:34:04.203641] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:12.587 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.587 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:12.587 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:12.587 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.587 14:34:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:12.587 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.587 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.844 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.844 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:12.844 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:12.844 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:12.844 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.844 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.844 [2024-10-01 14:34:04.304442] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:12.844 [2024-10-01 14:34:04.304628] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:12.844 [2024-10-01 14:34:04.366822] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:12.844 [2024-10-01 14:34:04.367040] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:12.844 [2024-10-01 14:34:04.367060] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:12.844 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.844 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:12.844 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:12.844 14:34:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.844 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.844 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.845 BaseBdev2 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.845 [ 00:09:12.845 { 00:09:12.845 "name": "BaseBdev2", 00:09:12.845 "aliases": [ 00:09:12.845 "4eb52b13-7083-4828-bf71-fa578e8943fb" 00:09:12.845 ], 00:09:12.845 "product_name": "Malloc disk", 00:09:12.845 "block_size": 512, 00:09:12.845 "num_blocks": 65536, 00:09:12.845 "uuid": "4eb52b13-7083-4828-bf71-fa578e8943fb", 00:09:12.845 "assigned_rate_limits": { 00:09:12.845 "rw_ios_per_sec": 0, 00:09:12.845 "rw_mbytes_per_sec": 0, 00:09:12.845 "r_mbytes_per_sec": 0, 00:09:12.845 "w_mbytes_per_sec": 0 00:09:12.845 }, 00:09:12.845 "claimed": false, 00:09:12.845 "zoned": false, 00:09:12.845 "supported_io_types": { 00:09:12.845 "read": true, 00:09:12.845 "write": true, 00:09:12.845 "unmap": true, 00:09:12.845 "flush": true, 00:09:12.845 "reset": true, 00:09:12.845 "nvme_admin": false, 00:09:12.845 "nvme_io": false, 00:09:12.845 "nvme_io_md": false, 00:09:12.845 "write_zeroes": true, 00:09:12.845 "zcopy": true, 00:09:12.845 "get_zone_info": false, 00:09:12.845 "zone_management": false, 00:09:12.845 "zone_append": false, 
00:09:12.845 "compare": false, 00:09:12.845 "compare_and_write": false, 00:09:12.845 "abort": true, 00:09:12.845 "seek_hole": false, 00:09:12.845 "seek_data": false, 00:09:12.845 "copy": true, 00:09:12.845 "nvme_iov_md": false 00:09:12.845 }, 00:09:12.845 "memory_domains": [ 00:09:12.845 { 00:09:12.845 "dma_device_id": "system", 00:09:12.845 "dma_device_type": 1 00:09:12.845 }, 00:09:12.845 { 00:09:12.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.845 "dma_device_type": 2 00:09:12.845 } 00:09:12.845 ], 00:09:12.845 "driver_specific": {} 00:09:12.845 } 00:09:12.845 ] 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.845 BaseBdev3 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.845 [ 00:09:12.845 { 00:09:12.845 "name": "BaseBdev3", 00:09:12.845 "aliases": [ 00:09:12.845 "162a967f-5ee4-435f-880e-2fed0344e493" 00:09:12.845 ], 00:09:12.845 "product_name": "Malloc disk", 00:09:12.845 "block_size": 512, 00:09:12.845 "num_blocks": 65536, 00:09:12.845 "uuid": "162a967f-5ee4-435f-880e-2fed0344e493", 00:09:12.845 "assigned_rate_limits": { 00:09:12.845 "rw_ios_per_sec": 0, 00:09:12.845 "rw_mbytes_per_sec": 0, 00:09:12.845 "r_mbytes_per_sec": 0, 00:09:12.845 "w_mbytes_per_sec": 0 00:09:12.845 }, 00:09:12.845 "claimed": false, 00:09:12.845 "zoned": false, 00:09:12.845 "supported_io_types": { 00:09:12.845 "read": true, 00:09:12.845 "write": true, 00:09:12.845 "unmap": true, 00:09:12.845 "flush": true, 00:09:12.845 "reset": true, 00:09:12.845 "nvme_admin": false, 00:09:12.845 "nvme_io": false, 00:09:12.845 "nvme_io_md": false, 00:09:12.845 "write_zeroes": true, 00:09:12.845 "zcopy": true, 00:09:12.845 "get_zone_info": false, 00:09:12.845 "zone_management": false, 00:09:12.845 "zone_append": false, 
00:09:12.845 "compare": false, 00:09:12.845 "compare_and_write": false, 00:09:12.845 "abort": true, 00:09:12.845 "seek_hole": false, 00:09:12.845 "seek_data": false, 00:09:12.845 "copy": true, 00:09:12.845 "nvme_iov_md": false 00:09:12.845 }, 00:09:12.845 "memory_domains": [ 00:09:12.845 { 00:09:12.845 "dma_device_id": "system", 00:09:12.845 "dma_device_type": 1 00:09:12.845 }, 00:09:12.845 { 00:09:12.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.845 "dma_device_type": 2 00:09:12.845 } 00:09:12.845 ], 00:09:12.845 "driver_specific": {} 00:09:12.845 } 00:09:12.845 ] 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.845 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.103 BaseBdev4 00:09:13.103 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.103 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:13.103 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:13.103 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:13.103 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:13.103 14:34:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:13.103 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:13.103 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:13.103 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.103 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.103 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.103 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:13.103 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.103 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.103 [ 00:09:13.103 { 00:09:13.103 "name": "BaseBdev4", 00:09:13.103 "aliases": [ 00:09:13.103 "9d717fd8-a32f-4e93-b02a-187c16c77eff" 00:09:13.103 ], 00:09:13.103 "product_name": "Malloc disk", 00:09:13.103 "block_size": 512, 00:09:13.103 "num_blocks": 65536, 00:09:13.103 "uuid": "9d717fd8-a32f-4e93-b02a-187c16c77eff", 00:09:13.103 "assigned_rate_limits": { 00:09:13.103 "rw_ios_per_sec": 0, 00:09:13.103 "rw_mbytes_per_sec": 0, 00:09:13.103 "r_mbytes_per_sec": 0, 00:09:13.103 "w_mbytes_per_sec": 0 00:09:13.103 }, 00:09:13.103 "claimed": false, 00:09:13.103 "zoned": false, 00:09:13.103 "supported_io_types": { 00:09:13.103 "read": true, 00:09:13.103 "write": true, 00:09:13.103 "unmap": true, 00:09:13.103 "flush": true, 00:09:13.103 "reset": true, 00:09:13.103 "nvme_admin": false, 00:09:13.103 "nvme_io": false, 00:09:13.103 "nvme_io_md": false, 00:09:13.103 "write_zeroes": true, 00:09:13.103 "zcopy": true, 00:09:13.103 "get_zone_info": false, 00:09:13.103 "zone_management": false, 00:09:13.103 "zone_append": false, 
00:09:13.103 "compare": false, 00:09:13.103 "compare_and_write": false, 00:09:13.103 "abort": true, 00:09:13.103 "seek_hole": false, 00:09:13.103 "seek_data": false, 00:09:13.103 "copy": true, 00:09:13.103 "nvme_iov_md": false 00:09:13.103 }, 00:09:13.103 "memory_domains": [ 00:09:13.103 { 00:09:13.103 "dma_device_id": "system", 00:09:13.103 "dma_device_type": 1 00:09:13.103 }, 00:09:13.103 { 00:09:13.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.103 "dma_device_type": 2 00:09:13.103 } 00:09:13.103 ], 00:09:13.103 "driver_specific": {} 00:09:13.103 } 00:09:13.103 ] 00:09:13.103 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.103 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:13.103 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:13.103 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:13.103 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:13.103 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.103 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.103 [2024-10-01 14:34:04.577360] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:13.103 [2024-10-01 14:34:04.577541] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:13.103 [2024-10-01 14:34:04.577612] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:13.103 [2024-10-01 14:34:04.579519] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:13.103 [2024-10-01 14:34:04.579651] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:13.103 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.103 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:13.103 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.103 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.103 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:13.103 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:13.103 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:13.103 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.103 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.103 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.103 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.103 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.103 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.103 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.103 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.103 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.103 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:13.103 "name": "Existed_Raid", 00:09:13.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.103 "strip_size_kb": 0, 00:09:13.103 "state": "configuring", 00:09:13.103 "raid_level": "raid1", 00:09:13.103 "superblock": false, 00:09:13.103 "num_base_bdevs": 4, 00:09:13.103 "num_base_bdevs_discovered": 3, 00:09:13.103 "num_base_bdevs_operational": 4, 00:09:13.103 "base_bdevs_list": [ 00:09:13.103 { 00:09:13.103 "name": "BaseBdev1", 00:09:13.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.103 "is_configured": false, 00:09:13.103 "data_offset": 0, 00:09:13.103 "data_size": 0 00:09:13.103 }, 00:09:13.103 { 00:09:13.103 "name": "BaseBdev2", 00:09:13.103 "uuid": "4eb52b13-7083-4828-bf71-fa578e8943fb", 00:09:13.103 "is_configured": true, 00:09:13.103 "data_offset": 0, 00:09:13.103 "data_size": 65536 00:09:13.103 }, 00:09:13.103 { 00:09:13.103 "name": "BaseBdev3", 00:09:13.103 "uuid": "162a967f-5ee4-435f-880e-2fed0344e493", 00:09:13.103 "is_configured": true, 00:09:13.103 "data_offset": 0, 00:09:13.103 "data_size": 65536 00:09:13.103 }, 00:09:13.103 { 00:09:13.103 "name": "BaseBdev4", 00:09:13.103 "uuid": "9d717fd8-a32f-4e93-b02a-187c16c77eff", 00:09:13.103 "is_configured": true, 00:09:13.103 "data_offset": 0, 00:09:13.103 "data_size": 65536 00:09:13.103 } 00:09:13.103 ] 00:09:13.103 }' 00:09:13.103 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.103 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.361 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:13.361 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.361 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.361 [2024-10-01 14:34:04.913447] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:09:13.361 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.361 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:13.361 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.361 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.361 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:13.361 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:13.361 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:13.361 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.361 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.361 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.361 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.361 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.361 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.361 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.361 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.361 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.361 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.361 "name": "Existed_Raid", 00:09:13.361 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:13.361 "strip_size_kb": 0, 00:09:13.361 "state": "configuring", 00:09:13.361 "raid_level": "raid1", 00:09:13.361 "superblock": false, 00:09:13.361 "num_base_bdevs": 4, 00:09:13.361 "num_base_bdevs_discovered": 2, 00:09:13.361 "num_base_bdevs_operational": 4, 00:09:13.361 "base_bdevs_list": [ 00:09:13.361 { 00:09:13.361 "name": "BaseBdev1", 00:09:13.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.361 "is_configured": false, 00:09:13.361 "data_offset": 0, 00:09:13.361 "data_size": 0 00:09:13.361 }, 00:09:13.361 { 00:09:13.361 "name": null, 00:09:13.361 "uuid": "4eb52b13-7083-4828-bf71-fa578e8943fb", 00:09:13.361 "is_configured": false, 00:09:13.361 "data_offset": 0, 00:09:13.361 "data_size": 65536 00:09:13.361 }, 00:09:13.361 { 00:09:13.361 "name": "BaseBdev3", 00:09:13.361 "uuid": "162a967f-5ee4-435f-880e-2fed0344e493", 00:09:13.361 "is_configured": true, 00:09:13.361 "data_offset": 0, 00:09:13.361 "data_size": 65536 00:09:13.361 }, 00:09:13.361 { 00:09:13.361 "name": "BaseBdev4", 00:09:13.361 "uuid": "9d717fd8-a32f-4e93-b02a-187c16c77eff", 00:09:13.361 "is_configured": true, 00:09:13.361 "data_offset": 0, 00:09:13.361 "data_size": 65536 00:09:13.361 } 00:09:13.361 ] 00:09:13.361 }' 00:09:13.361 14:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.361 14:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.927 14:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.927 14:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.927 14:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.927 14:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:13.927 14:34:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.927 14:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:13.927 14:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:13.927 14:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.927 14:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.927 [2024-10-01 14:34:05.364059] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:13.927 BaseBdev1 00:09:13.927 14:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.927 14:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:13.927 14:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:13.927 14:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:13.927 14:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:13.927 14:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:13.927 14:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:13.927 14:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:13.927 14:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.927 14:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.927 14:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.927 14:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:09:13.927 14:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.927 14:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.927 [ 00:09:13.927 { 00:09:13.927 "name": "BaseBdev1", 00:09:13.927 "aliases": [ 00:09:13.927 "292a2612-d286-4f2c-bc0d-455f9c21b877" 00:09:13.927 ], 00:09:13.927 "product_name": "Malloc disk", 00:09:13.927 "block_size": 512, 00:09:13.927 "num_blocks": 65536, 00:09:13.927 "uuid": "292a2612-d286-4f2c-bc0d-455f9c21b877", 00:09:13.927 "assigned_rate_limits": { 00:09:13.927 "rw_ios_per_sec": 0, 00:09:13.927 "rw_mbytes_per_sec": 0, 00:09:13.927 "r_mbytes_per_sec": 0, 00:09:13.927 "w_mbytes_per_sec": 0 00:09:13.927 }, 00:09:13.927 "claimed": true, 00:09:13.927 "claim_type": "exclusive_write", 00:09:13.927 "zoned": false, 00:09:13.927 "supported_io_types": { 00:09:13.927 "read": true, 00:09:13.927 "write": true, 00:09:13.927 "unmap": true, 00:09:13.927 "flush": true, 00:09:13.927 "reset": true, 00:09:13.927 "nvme_admin": false, 00:09:13.927 "nvme_io": false, 00:09:13.927 "nvme_io_md": false, 00:09:13.927 "write_zeroes": true, 00:09:13.927 "zcopy": true, 00:09:13.927 "get_zone_info": false, 00:09:13.927 "zone_management": false, 00:09:13.927 "zone_append": false, 00:09:13.927 "compare": false, 00:09:13.927 "compare_and_write": false, 00:09:13.927 "abort": true, 00:09:13.927 "seek_hole": false, 00:09:13.927 "seek_data": false, 00:09:13.927 "copy": true, 00:09:13.927 "nvme_iov_md": false 00:09:13.927 }, 00:09:13.927 "memory_domains": [ 00:09:13.927 { 00:09:13.927 "dma_device_id": "system", 00:09:13.927 "dma_device_type": 1 00:09:13.927 }, 00:09:13.927 { 00:09:13.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.927 "dma_device_type": 2 00:09:13.927 } 00:09:13.927 ], 00:09:13.927 "driver_specific": {} 00:09:13.927 } 00:09:13.927 ] 00:09:13.927 14:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:13.927 14:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:13.927 14:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:13.927 14:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.927 14:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.927 14:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:13.927 14:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:13.927 14:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:13.927 14:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.927 14:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.927 14:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.927 14:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.927 14:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.927 14:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.927 14:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.927 14:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.927 14:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.927 14:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.927 "name": "Existed_Raid", 00:09:13.927 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:13.927 "strip_size_kb": 0, 00:09:13.927 "state": "configuring", 00:09:13.927 "raid_level": "raid1", 00:09:13.927 "superblock": false, 00:09:13.927 "num_base_bdevs": 4, 00:09:13.927 "num_base_bdevs_discovered": 3, 00:09:13.927 "num_base_bdevs_operational": 4, 00:09:13.927 "base_bdevs_list": [ 00:09:13.927 { 00:09:13.927 "name": "BaseBdev1", 00:09:13.927 "uuid": "292a2612-d286-4f2c-bc0d-455f9c21b877", 00:09:13.927 "is_configured": true, 00:09:13.927 "data_offset": 0, 00:09:13.927 "data_size": 65536 00:09:13.927 }, 00:09:13.927 { 00:09:13.927 "name": null, 00:09:13.927 "uuid": "4eb52b13-7083-4828-bf71-fa578e8943fb", 00:09:13.927 "is_configured": false, 00:09:13.927 "data_offset": 0, 00:09:13.927 "data_size": 65536 00:09:13.927 }, 00:09:13.927 { 00:09:13.927 "name": "BaseBdev3", 00:09:13.927 "uuid": "162a967f-5ee4-435f-880e-2fed0344e493", 00:09:13.927 "is_configured": true, 00:09:13.927 "data_offset": 0, 00:09:13.927 "data_size": 65536 00:09:13.927 }, 00:09:13.927 { 00:09:13.927 "name": "BaseBdev4", 00:09:13.927 "uuid": "9d717fd8-a32f-4e93-b02a-187c16c77eff", 00:09:13.927 "is_configured": true, 00:09:13.927 "data_offset": 0, 00:09:13.927 "data_size": 65536 00:09:13.927 } 00:09:13.927 ] 00:09:13.927 }' 00:09:13.927 14:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.927 14:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.185 14:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.185 14:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:14.185 14:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.185 14:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.185 14:34:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.185 14:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:14.185 14:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:14.185 14:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.185 14:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.185 [2024-10-01 14:34:05.768240] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:14.185 14:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.185 14:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:14.185 14:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.185 14:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.185 14:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:14.185 14:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:14.185 14:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:14.185 14:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.185 14:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.185 14:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.185 14:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.185 14:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:09:14.185 14:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.185 14:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.185 14:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.185 14:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.185 14:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.185 "name": "Existed_Raid", 00:09:14.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.185 "strip_size_kb": 0, 00:09:14.185 "state": "configuring", 00:09:14.185 "raid_level": "raid1", 00:09:14.185 "superblock": false, 00:09:14.185 "num_base_bdevs": 4, 00:09:14.185 "num_base_bdevs_discovered": 2, 00:09:14.185 "num_base_bdevs_operational": 4, 00:09:14.185 "base_bdevs_list": [ 00:09:14.185 { 00:09:14.185 "name": "BaseBdev1", 00:09:14.185 "uuid": "292a2612-d286-4f2c-bc0d-455f9c21b877", 00:09:14.185 "is_configured": true, 00:09:14.185 "data_offset": 0, 00:09:14.185 "data_size": 65536 00:09:14.185 }, 00:09:14.185 { 00:09:14.185 "name": null, 00:09:14.185 "uuid": "4eb52b13-7083-4828-bf71-fa578e8943fb", 00:09:14.185 "is_configured": false, 00:09:14.185 "data_offset": 0, 00:09:14.185 "data_size": 65536 00:09:14.185 }, 00:09:14.185 { 00:09:14.185 "name": null, 00:09:14.185 "uuid": "162a967f-5ee4-435f-880e-2fed0344e493", 00:09:14.185 "is_configured": false, 00:09:14.185 "data_offset": 0, 00:09:14.185 "data_size": 65536 00:09:14.185 }, 00:09:14.185 { 00:09:14.185 "name": "BaseBdev4", 00:09:14.185 "uuid": "9d717fd8-a32f-4e93-b02a-187c16c77eff", 00:09:14.185 "is_configured": true, 00:09:14.185 "data_offset": 0, 00:09:14.185 "data_size": 65536 00:09:14.185 } 00:09:14.185 ] 00:09:14.185 }' 00:09:14.185 14:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.185 14:34:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.442 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.442 14:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.442 14:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.442 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:14.442 14:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.442 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:14.442 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:14.442 14:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.442 14:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.442 [2024-10-01 14:34:06.108313] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:14.442 14:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.442 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:14.442 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.442 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.442 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:14.442 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:14.442 14:34:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:14.442 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.442 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.442 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.442 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.442 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.442 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.443 14:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.443 14:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.700 14:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.700 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.700 "name": "Existed_Raid", 00:09:14.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.700 "strip_size_kb": 0, 00:09:14.700 "state": "configuring", 00:09:14.700 "raid_level": "raid1", 00:09:14.700 "superblock": false, 00:09:14.700 "num_base_bdevs": 4, 00:09:14.700 "num_base_bdevs_discovered": 3, 00:09:14.700 "num_base_bdevs_operational": 4, 00:09:14.700 "base_bdevs_list": [ 00:09:14.700 { 00:09:14.700 "name": "BaseBdev1", 00:09:14.700 "uuid": "292a2612-d286-4f2c-bc0d-455f9c21b877", 00:09:14.700 "is_configured": true, 00:09:14.700 "data_offset": 0, 00:09:14.700 "data_size": 65536 00:09:14.700 }, 00:09:14.700 { 00:09:14.700 "name": null, 00:09:14.700 "uuid": "4eb52b13-7083-4828-bf71-fa578e8943fb", 00:09:14.700 "is_configured": false, 00:09:14.700 "data_offset": 
0, 00:09:14.700 "data_size": 65536 00:09:14.700 }, 00:09:14.700 { 00:09:14.700 "name": "BaseBdev3", 00:09:14.700 "uuid": "162a967f-5ee4-435f-880e-2fed0344e493", 00:09:14.700 "is_configured": true, 00:09:14.700 "data_offset": 0, 00:09:14.700 "data_size": 65536 00:09:14.700 }, 00:09:14.700 { 00:09:14.700 "name": "BaseBdev4", 00:09:14.700 "uuid": "9d717fd8-a32f-4e93-b02a-187c16c77eff", 00:09:14.700 "is_configured": true, 00:09:14.700 "data_offset": 0, 00:09:14.700 "data_size": 65536 00:09:14.700 } 00:09:14.700 ] 00:09:14.700 }' 00:09:14.700 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.700 14:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.957 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:14.957 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.957 14:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.957 14:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.957 14:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.957 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:14.957 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:14.957 14:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.957 14:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.957 [2024-10-01 14:34:06.476488] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:14.957 14:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.957 14:34:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:14.957 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.957 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.957 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:14.957 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:14.957 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:14.957 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.958 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.958 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.958 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.958 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.958 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.958 14:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.958 14:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.958 14:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.958 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.958 "name": "Existed_Raid", 00:09:14.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.958 "strip_size_kb": 0, 00:09:14.958 "state": "configuring", 00:09:14.958 
"raid_level": "raid1", 00:09:14.958 "superblock": false, 00:09:14.958 "num_base_bdevs": 4, 00:09:14.958 "num_base_bdevs_discovered": 2, 00:09:14.958 "num_base_bdevs_operational": 4, 00:09:14.958 "base_bdevs_list": [ 00:09:14.958 { 00:09:14.958 "name": null, 00:09:14.958 "uuid": "292a2612-d286-4f2c-bc0d-455f9c21b877", 00:09:14.958 "is_configured": false, 00:09:14.958 "data_offset": 0, 00:09:14.958 "data_size": 65536 00:09:14.958 }, 00:09:14.958 { 00:09:14.958 "name": null, 00:09:14.958 "uuid": "4eb52b13-7083-4828-bf71-fa578e8943fb", 00:09:14.958 "is_configured": false, 00:09:14.958 "data_offset": 0, 00:09:14.958 "data_size": 65536 00:09:14.958 }, 00:09:14.958 { 00:09:14.958 "name": "BaseBdev3", 00:09:14.958 "uuid": "162a967f-5ee4-435f-880e-2fed0344e493", 00:09:14.958 "is_configured": true, 00:09:14.958 "data_offset": 0, 00:09:14.958 "data_size": 65536 00:09:14.958 }, 00:09:14.958 { 00:09:14.958 "name": "BaseBdev4", 00:09:14.958 "uuid": "9d717fd8-a32f-4e93-b02a-187c16c77eff", 00:09:14.958 "is_configured": true, 00:09:14.958 "data_offset": 0, 00:09:14.958 "data_size": 65536 00:09:14.958 } 00:09:14.958 ] 00:09:14.958 }' 00:09:14.958 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.958 14:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.215 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.215 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:15.215 14:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.215 14:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.215 14:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.215 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:09:15.215 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:15.215 14:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.215 14:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.215 [2024-10-01 14:34:06.884346] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:15.215 14:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.215 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:15.215 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.215 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.215 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:15.215 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:15.215 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:15.215 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.215 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.215 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.215 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.215 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.215 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:15.215 14:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.215 14:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.473 14:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.473 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.473 "name": "Existed_Raid", 00:09:15.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.473 "strip_size_kb": 0, 00:09:15.473 "state": "configuring", 00:09:15.473 "raid_level": "raid1", 00:09:15.473 "superblock": false, 00:09:15.473 "num_base_bdevs": 4, 00:09:15.473 "num_base_bdevs_discovered": 3, 00:09:15.473 "num_base_bdevs_operational": 4, 00:09:15.473 "base_bdevs_list": [ 00:09:15.473 { 00:09:15.473 "name": null, 00:09:15.473 "uuid": "292a2612-d286-4f2c-bc0d-455f9c21b877", 00:09:15.473 "is_configured": false, 00:09:15.473 "data_offset": 0, 00:09:15.473 "data_size": 65536 00:09:15.473 }, 00:09:15.473 { 00:09:15.473 "name": "BaseBdev2", 00:09:15.473 "uuid": "4eb52b13-7083-4828-bf71-fa578e8943fb", 00:09:15.473 "is_configured": true, 00:09:15.473 "data_offset": 0, 00:09:15.473 "data_size": 65536 00:09:15.473 }, 00:09:15.473 { 00:09:15.473 "name": "BaseBdev3", 00:09:15.473 "uuid": "162a967f-5ee4-435f-880e-2fed0344e493", 00:09:15.473 "is_configured": true, 00:09:15.473 "data_offset": 0, 00:09:15.473 "data_size": 65536 00:09:15.473 }, 00:09:15.473 { 00:09:15.473 "name": "BaseBdev4", 00:09:15.473 "uuid": "9d717fd8-a32f-4e93-b02a-187c16c77eff", 00:09:15.473 "is_configured": true, 00:09:15.473 "data_offset": 0, 00:09:15.473 "data_size": 65536 00:09:15.473 } 00:09:15.473 ] 00:09:15.473 }' 00:09:15.473 14:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.473 14:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.731 14:34:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.731 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:15.731 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.731 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.731 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.731 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:15.731 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.731 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.731 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:15.731 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.731 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.731 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 292a2612-d286-4f2c-bc0d-455f9c21b877 00:09:15.731 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.731 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.731 [2024-10-01 14:34:07.293811] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:15.731 [2024-10-01 14:34:07.294082] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:15.731 [2024-10-01 14:34:07.294105] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:15.731 
[2024-10-01 14:34:07.294408] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:15.731 [2024-10-01 14:34:07.294567] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:15.731 [2024-10-01 14:34:07.294576] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:15.731 [2024-10-01 14:34:07.294867] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:15.731 NewBaseBdev 00:09:15.731 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.731 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:15.731 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:15.731 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:15.731 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:15.731 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:15.731 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:15.731 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:15.731 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.731 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.731 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.731 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:15.731 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:15.731 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.731 [ 00:09:15.731 { 00:09:15.731 "name": "NewBaseBdev", 00:09:15.731 "aliases": [ 00:09:15.731 "292a2612-d286-4f2c-bc0d-455f9c21b877" 00:09:15.731 ], 00:09:15.731 "product_name": "Malloc disk", 00:09:15.731 "block_size": 512, 00:09:15.731 "num_blocks": 65536, 00:09:15.731 "uuid": "292a2612-d286-4f2c-bc0d-455f9c21b877", 00:09:15.731 "assigned_rate_limits": { 00:09:15.731 "rw_ios_per_sec": 0, 00:09:15.731 "rw_mbytes_per_sec": 0, 00:09:15.731 "r_mbytes_per_sec": 0, 00:09:15.731 "w_mbytes_per_sec": 0 00:09:15.731 }, 00:09:15.731 "claimed": true, 00:09:15.731 "claim_type": "exclusive_write", 00:09:15.731 "zoned": false, 00:09:15.731 "supported_io_types": { 00:09:15.731 "read": true, 00:09:15.731 "write": true, 00:09:15.731 "unmap": true, 00:09:15.731 "flush": true, 00:09:15.731 "reset": true, 00:09:15.731 "nvme_admin": false, 00:09:15.731 "nvme_io": false, 00:09:15.731 "nvme_io_md": false, 00:09:15.731 "write_zeroes": true, 00:09:15.731 "zcopy": true, 00:09:15.731 "get_zone_info": false, 00:09:15.731 "zone_management": false, 00:09:15.731 "zone_append": false, 00:09:15.731 "compare": false, 00:09:15.731 "compare_and_write": false, 00:09:15.731 "abort": true, 00:09:15.731 "seek_hole": false, 00:09:15.731 "seek_data": false, 00:09:15.731 "copy": true, 00:09:15.731 "nvme_iov_md": false 00:09:15.731 }, 00:09:15.731 "memory_domains": [ 00:09:15.731 { 00:09:15.731 "dma_device_id": "system", 00:09:15.731 "dma_device_type": 1 00:09:15.731 }, 00:09:15.731 { 00:09:15.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.731 "dma_device_type": 2 00:09:15.731 } 00:09:15.731 ], 00:09:15.731 "driver_specific": {} 00:09:15.731 } 00:09:15.731 ] 00:09:15.731 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.731 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 
00:09:15.731 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:09:15.731 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.731 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:15.731 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:15.731 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:15.731 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:15.731 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.731 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.731 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.731 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.731 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.731 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.731 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.731 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.731 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.731 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.731 "name": "Existed_Raid", 00:09:15.731 "uuid": "5baf51ae-e290-43ee-bdff-e115adbe58ed", 00:09:15.731 "strip_size_kb": 0, 00:09:15.731 "state": "online", 00:09:15.731 
"raid_level": "raid1", 00:09:15.731 "superblock": false, 00:09:15.731 "num_base_bdevs": 4, 00:09:15.731 "num_base_bdevs_discovered": 4, 00:09:15.731 "num_base_bdevs_operational": 4, 00:09:15.731 "base_bdevs_list": [ 00:09:15.731 { 00:09:15.731 "name": "NewBaseBdev", 00:09:15.731 "uuid": "292a2612-d286-4f2c-bc0d-455f9c21b877", 00:09:15.731 "is_configured": true, 00:09:15.731 "data_offset": 0, 00:09:15.731 "data_size": 65536 00:09:15.731 }, 00:09:15.731 { 00:09:15.731 "name": "BaseBdev2", 00:09:15.732 "uuid": "4eb52b13-7083-4828-bf71-fa578e8943fb", 00:09:15.732 "is_configured": true, 00:09:15.732 "data_offset": 0, 00:09:15.732 "data_size": 65536 00:09:15.732 }, 00:09:15.732 { 00:09:15.732 "name": "BaseBdev3", 00:09:15.732 "uuid": "162a967f-5ee4-435f-880e-2fed0344e493", 00:09:15.732 "is_configured": true, 00:09:15.732 "data_offset": 0, 00:09:15.732 "data_size": 65536 00:09:15.732 }, 00:09:15.732 { 00:09:15.732 "name": "BaseBdev4", 00:09:15.732 "uuid": "9d717fd8-a32f-4e93-b02a-187c16c77eff", 00:09:15.732 "is_configured": true, 00:09:15.732 "data_offset": 0, 00:09:15.732 "data_size": 65536 00:09:15.732 } 00:09:15.732 ] 00:09:15.732 }' 00:09:15.732 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.732 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.060 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:16.060 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:16.060 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:16.060 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:16.060 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:16.060 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:09:16.060 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:16.060 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.060 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.060 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:16.060 [2024-10-01 14:34:07.630387] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:16.060 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.060 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:16.060 "name": "Existed_Raid", 00:09:16.060 "aliases": [ 00:09:16.060 "5baf51ae-e290-43ee-bdff-e115adbe58ed" 00:09:16.060 ], 00:09:16.060 "product_name": "Raid Volume", 00:09:16.060 "block_size": 512, 00:09:16.060 "num_blocks": 65536, 00:09:16.060 "uuid": "5baf51ae-e290-43ee-bdff-e115adbe58ed", 00:09:16.060 "assigned_rate_limits": { 00:09:16.060 "rw_ios_per_sec": 0, 00:09:16.060 "rw_mbytes_per_sec": 0, 00:09:16.060 "r_mbytes_per_sec": 0, 00:09:16.060 "w_mbytes_per_sec": 0 00:09:16.060 }, 00:09:16.060 "claimed": false, 00:09:16.060 "zoned": false, 00:09:16.060 "supported_io_types": { 00:09:16.060 "read": true, 00:09:16.060 "write": true, 00:09:16.060 "unmap": false, 00:09:16.060 "flush": false, 00:09:16.060 "reset": true, 00:09:16.060 "nvme_admin": false, 00:09:16.060 "nvme_io": false, 00:09:16.060 "nvme_io_md": false, 00:09:16.060 "write_zeroes": true, 00:09:16.060 "zcopy": false, 00:09:16.060 "get_zone_info": false, 00:09:16.060 "zone_management": false, 00:09:16.060 "zone_append": false, 00:09:16.060 "compare": false, 00:09:16.060 "compare_and_write": false, 00:09:16.060 "abort": false, 00:09:16.060 "seek_hole": false, 00:09:16.060 "seek_data": false, 00:09:16.060 
"copy": false, 00:09:16.060 "nvme_iov_md": false 00:09:16.060 }, 00:09:16.060 "memory_domains": [ 00:09:16.060 { 00:09:16.060 "dma_device_id": "system", 00:09:16.060 "dma_device_type": 1 00:09:16.060 }, 00:09:16.060 { 00:09:16.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.060 "dma_device_type": 2 00:09:16.060 }, 00:09:16.060 { 00:09:16.060 "dma_device_id": "system", 00:09:16.060 "dma_device_type": 1 00:09:16.060 }, 00:09:16.060 { 00:09:16.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.060 "dma_device_type": 2 00:09:16.060 }, 00:09:16.060 { 00:09:16.060 "dma_device_id": "system", 00:09:16.060 "dma_device_type": 1 00:09:16.060 }, 00:09:16.060 { 00:09:16.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.060 "dma_device_type": 2 00:09:16.060 }, 00:09:16.060 { 00:09:16.060 "dma_device_id": "system", 00:09:16.060 "dma_device_type": 1 00:09:16.060 }, 00:09:16.060 { 00:09:16.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.060 "dma_device_type": 2 00:09:16.060 } 00:09:16.060 ], 00:09:16.060 "driver_specific": { 00:09:16.060 "raid": { 00:09:16.060 "uuid": "5baf51ae-e290-43ee-bdff-e115adbe58ed", 00:09:16.060 "strip_size_kb": 0, 00:09:16.060 "state": "online", 00:09:16.060 "raid_level": "raid1", 00:09:16.060 "superblock": false, 00:09:16.060 "num_base_bdevs": 4, 00:09:16.060 "num_base_bdevs_discovered": 4, 00:09:16.060 "num_base_bdevs_operational": 4, 00:09:16.060 "base_bdevs_list": [ 00:09:16.060 { 00:09:16.060 "name": "NewBaseBdev", 00:09:16.060 "uuid": "292a2612-d286-4f2c-bc0d-455f9c21b877", 00:09:16.060 "is_configured": true, 00:09:16.060 "data_offset": 0, 00:09:16.060 "data_size": 65536 00:09:16.060 }, 00:09:16.060 { 00:09:16.060 "name": "BaseBdev2", 00:09:16.060 "uuid": "4eb52b13-7083-4828-bf71-fa578e8943fb", 00:09:16.060 "is_configured": true, 00:09:16.060 "data_offset": 0, 00:09:16.060 "data_size": 65536 00:09:16.060 }, 00:09:16.060 { 00:09:16.060 "name": "BaseBdev3", 00:09:16.060 "uuid": "162a967f-5ee4-435f-880e-2fed0344e493", 00:09:16.060 
"is_configured": true, 00:09:16.060 "data_offset": 0, 00:09:16.060 "data_size": 65536 00:09:16.060 }, 00:09:16.060 { 00:09:16.060 "name": "BaseBdev4", 00:09:16.060 "uuid": "9d717fd8-a32f-4e93-b02a-187c16c77eff", 00:09:16.060 "is_configured": true, 00:09:16.060 "data_offset": 0, 00:09:16.060 "data_size": 65536 00:09:16.060 } 00:09:16.060 ] 00:09:16.060 } 00:09:16.060 } 00:09:16.060 }' 00:09:16.060 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:16.060 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:16.060 BaseBdev2 00:09:16.060 BaseBdev3 00:09:16.060 BaseBdev4' 00:09:16.060 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.060 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:16.060 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.060 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:16.060 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.060 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.060 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.060 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.317 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.317 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.317 14:34:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.317 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:16.317 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.317 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.317 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.317 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.317 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.317 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.317 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.317 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.317 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:16.317 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.317 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.317 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.317 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.317 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.317 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.317 14:34:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:16.317 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.317 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.317 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.318 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.318 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.318 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.318 14:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:16.318 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.318 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.318 [2024-10-01 14:34:07.870075] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:16.318 [2024-10-01 14:34:07.870117] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:16.318 [2024-10-01 14:34:07.870205] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:16.318 [2024-10-01 14:34:07.870522] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:16.318 [2024-10-01 14:34:07.870535] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:16.318 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.318 14:34:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 71473 00:09:16.318 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 71473 ']' 00:09:16.318 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 71473 00:09:16.318 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:16.318 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:16.318 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71473 00:09:16.318 killing process with pid 71473 00:09:16.318 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:16.318 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:16.318 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71473' 00:09:16.318 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 71473 00:09:16.318 14:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 71473 00:09:16.318 [2024-10-01 14:34:07.893924] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:16.574 [2024-10-01 14:34:08.154687] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:17.504 14:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:17.504 00:09:17.504 real 0m9.007s 00:09:17.504 user 0m14.339s 00:09:17.504 sys 0m1.322s 00:09:17.504 14:34:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:17.504 14:34:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.504 ************************************ 00:09:17.504 END TEST raid_state_function_test 00:09:17.504 ************************************ 
00:09:17.504 14:34:09 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:09:17.504 14:34:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:17.504 14:34:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:17.504 14:34:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:17.504 ************************************ 00:09:17.504 START TEST raid_state_function_test_sb 00:09:17.504 ************************************ 00:09:17.504 14:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:09:17.504 14:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:17.504 14:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:17.504 14:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:17.504 14:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:17.504 14:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:17.504 14:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:17.504 14:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:17.504 14:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:17.504 14:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:17.504 14:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:17.504 14:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:17.504 14:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:17.504 
14:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:17.504 14:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:17.504 14:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:17.504 14:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:17.504 14:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:17.504 14:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:17.504 14:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:17.504 14:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:17.504 14:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:17.504 14:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:17.504 14:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:17.504 14:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:17.504 14:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:17.504 14:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:17.504 Process raid pid: 72118 00:09:17.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:17.504 14:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:17.504 14:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:17.504 14:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72118 00:09:17.504 14:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72118' 00:09:17.504 14:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72118 00:09:17.504 14:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 72118 ']' 00:09:17.504 14:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.504 14:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:17.504 14:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.504 14:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:17.504 14:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.504 14:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:17.761 [2024-10-01 14:34:09.222721] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:09:17.761 [2024-10-01 14:34:09.222838] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:17.761 [2024-10-01 14:34:09.372020] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.018 [2024-10-01 14:34:09.595310] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.275 [2024-10-01 14:34:09.745762] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:18.275 [2024-10-01 14:34:09.745817] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:18.532 14:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:18.532 14:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:18.532 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:18.532 14:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.532 14:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.532 [2024-10-01 14:34:10.086600] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:18.532 [2024-10-01 14:34:10.086678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:18.532 [2024-10-01 14:34:10.086689] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:18.532 [2024-10-01 14:34:10.086699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:18.532 [2024-10-01 14:34:10.086721] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:09:18.532 [2024-10-01 14:34:10.086734] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:18.532 [2024-10-01 14:34:10.086742] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:18.532 [2024-10-01 14:34:10.086752] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:18.532 14:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.532 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:18.532 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.532 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.532 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:18.532 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:18.532 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:18.532 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.532 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.532 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.532 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.532 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.532 14:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.532 14:34:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.532 14:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.532 14:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.532 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.532 "name": "Existed_Raid", 00:09:18.532 "uuid": "65988d32-ec7d-44d2-ba29-120d201897d9", 00:09:18.532 "strip_size_kb": 0, 00:09:18.532 "state": "configuring", 00:09:18.532 "raid_level": "raid1", 00:09:18.532 "superblock": true, 00:09:18.532 "num_base_bdevs": 4, 00:09:18.532 "num_base_bdevs_discovered": 0, 00:09:18.532 "num_base_bdevs_operational": 4, 00:09:18.532 "base_bdevs_list": [ 00:09:18.532 { 00:09:18.532 "name": "BaseBdev1", 00:09:18.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.532 "is_configured": false, 00:09:18.532 "data_offset": 0, 00:09:18.532 "data_size": 0 00:09:18.532 }, 00:09:18.532 { 00:09:18.532 "name": "BaseBdev2", 00:09:18.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.532 "is_configured": false, 00:09:18.532 "data_offset": 0, 00:09:18.532 "data_size": 0 00:09:18.532 }, 00:09:18.532 { 00:09:18.532 "name": "BaseBdev3", 00:09:18.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.532 "is_configured": false, 00:09:18.532 "data_offset": 0, 00:09:18.532 "data_size": 0 00:09:18.532 }, 00:09:18.532 { 00:09:18.532 "name": "BaseBdev4", 00:09:18.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.532 "is_configured": false, 00:09:18.532 "data_offset": 0, 00:09:18.532 "data_size": 0 00:09:18.532 } 00:09:18.532 ] 00:09:18.532 }' 00:09:18.532 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.532 14:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.790 14:34:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:18.790 14:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.790 14:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.790 [2024-10-01 14:34:10.418564] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:18.790 [2024-10-01 14:34:10.418812] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:18.790 14:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.791 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:18.791 14:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.791 14:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.791 [2024-10-01 14:34:10.426583] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:18.791 [2024-10-01 14:34:10.426765] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:18.791 [2024-10-01 14:34:10.426829] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:18.791 [2024-10-01 14:34:10.426857] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:18.791 [2024-10-01 14:34:10.426916] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:18.791 [2024-10-01 14:34:10.426943] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:18.791 [2024-10-01 14:34:10.426961] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev4 00:09:18.791 [2024-10-01 14:34:10.427011] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:18.791 14:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.791 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:18.791 14:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.791 14:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.048 [2024-10-01 14:34:10.476765] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:19.048 BaseBdev1 00:09:19.048 14:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.048 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:19.048 14:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:19.048 14:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:19.048 14:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:19.048 14:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:19.048 14:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:19.048 14:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:19.048 14:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.048 14:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.048 14:34:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.048 14:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:19.048 14:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.048 14:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.048 [ 00:09:19.048 { 00:09:19.048 "name": "BaseBdev1", 00:09:19.048 "aliases": [ 00:09:19.048 "c6246a62-0f0c-40af-a75c-2a226ba9ac6a" 00:09:19.048 ], 00:09:19.048 "product_name": "Malloc disk", 00:09:19.048 "block_size": 512, 00:09:19.048 "num_blocks": 65536, 00:09:19.048 "uuid": "c6246a62-0f0c-40af-a75c-2a226ba9ac6a", 00:09:19.048 "assigned_rate_limits": { 00:09:19.048 "rw_ios_per_sec": 0, 00:09:19.048 "rw_mbytes_per_sec": 0, 00:09:19.048 "r_mbytes_per_sec": 0, 00:09:19.048 "w_mbytes_per_sec": 0 00:09:19.048 }, 00:09:19.048 "claimed": true, 00:09:19.048 "claim_type": "exclusive_write", 00:09:19.048 "zoned": false, 00:09:19.048 "supported_io_types": { 00:09:19.048 "read": true, 00:09:19.048 "write": true, 00:09:19.048 "unmap": true, 00:09:19.048 "flush": true, 00:09:19.048 "reset": true, 00:09:19.048 "nvme_admin": false, 00:09:19.048 "nvme_io": false, 00:09:19.048 "nvme_io_md": false, 00:09:19.048 "write_zeroes": true, 00:09:19.048 "zcopy": true, 00:09:19.048 "get_zone_info": false, 00:09:19.048 "zone_management": false, 00:09:19.048 "zone_append": false, 00:09:19.048 "compare": false, 00:09:19.048 "compare_and_write": false, 00:09:19.048 "abort": true, 00:09:19.048 "seek_hole": false, 00:09:19.048 "seek_data": false, 00:09:19.048 "copy": true, 00:09:19.048 "nvme_iov_md": false 00:09:19.048 }, 00:09:19.048 "memory_domains": [ 00:09:19.048 { 00:09:19.048 "dma_device_id": "system", 00:09:19.048 "dma_device_type": 1 00:09:19.048 }, 00:09:19.048 { 00:09:19.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.048 "dma_device_type": 2 00:09:19.048 } 00:09:19.048 
], 00:09:19.048 "driver_specific": {} 00:09:19.048 } 00:09:19.048 ] 00:09:19.048 14:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.048 14:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:19.048 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:19.048 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.048 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.048 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:19.048 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:19.048 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:19.048 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.048 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.048 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.048 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.048 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.048 14:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.048 14:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.048 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.048 14:34:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.048 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.048 "name": "Existed_Raid", 00:09:19.048 "uuid": "8ec10f03-d1bf-495c-b6ee-4f77736b7a7a", 00:09:19.048 "strip_size_kb": 0, 00:09:19.048 "state": "configuring", 00:09:19.048 "raid_level": "raid1", 00:09:19.048 "superblock": true, 00:09:19.048 "num_base_bdevs": 4, 00:09:19.048 "num_base_bdevs_discovered": 1, 00:09:19.048 "num_base_bdevs_operational": 4, 00:09:19.048 "base_bdevs_list": [ 00:09:19.048 { 00:09:19.048 "name": "BaseBdev1", 00:09:19.048 "uuid": "c6246a62-0f0c-40af-a75c-2a226ba9ac6a", 00:09:19.048 "is_configured": true, 00:09:19.049 "data_offset": 2048, 00:09:19.049 "data_size": 63488 00:09:19.049 }, 00:09:19.049 { 00:09:19.049 "name": "BaseBdev2", 00:09:19.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.049 "is_configured": false, 00:09:19.049 "data_offset": 0, 00:09:19.049 "data_size": 0 00:09:19.049 }, 00:09:19.049 { 00:09:19.049 "name": "BaseBdev3", 00:09:19.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.049 "is_configured": false, 00:09:19.049 "data_offset": 0, 00:09:19.049 "data_size": 0 00:09:19.049 }, 00:09:19.049 { 00:09:19.049 "name": "BaseBdev4", 00:09:19.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.049 "is_configured": false, 00:09:19.049 "data_offset": 0, 00:09:19.049 "data_size": 0 00:09:19.049 } 00:09:19.049 ] 00:09:19.049 }' 00:09:19.049 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.049 14:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.306 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:19.307 14:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.307 14:34:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.307 [2024-10-01 14:34:10.848908] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:19.307 [2024-10-01 14:34:10.849154] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:19.307 14:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.307 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:19.307 14:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.307 14:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.307 [2024-10-01 14:34:10.856945] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:19.307 [2024-10-01 14:34:10.859141] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:19.307 [2024-10-01 14:34:10.859191] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:19.307 [2024-10-01 14:34:10.859201] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:19.307 [2024-10-01 14:34:10.859213] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:19.307 [2024-10-01 14:34:10.859220] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:19.307 [2024-10-01 14:34:10.859229] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:19.307 14:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.307 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 
1 )) 00:09:19.307 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:19.307 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:19.307 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.307 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.307 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:19.307 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:19.307 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:19.307 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.307 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.307 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.307 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.307 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.307 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.307 14:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.307 14:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.307 14:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.307 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:09:19.307 "name": "Existed_Raid", 00:09:19.307 "uuid": "160ac279-07e0-47cd-9054-5e3479044293", 00:09:19.307 "strip_size_kb": 0, 00:09:19.307 "state": "configuring", 00:09:19.307 "raid_level": "raid1", 00:09:19.307 "superblock": true, 00:09:19.307 "num_base_bdevs": 4, 00:09:19.307 "num_base_bdevs_discovered": 1, 00:09:19.307 "num_base_bdevs_operational": 4, 00:09:19.307 "base_bdevs_list": [ 00:09:19.307 { 00:09:19.307 "name": "BaseBdev1", 00:09:19.307 "uuid": "c6246a62-0f0c-40af-a75c-2a226ba9ac6a", 00:09:19.307 "is_configured": true, 00:09:19.307 "data_offset": 2048, 00:09:19.307 "data_size": 63488 00:09:19.307 }, 00:09:19.307 { 00:09:19.307 "name": "BaseBdev2", 00:09:19.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.307 "is_configured": false, 00:09:19.307 "data_offset": 0, 00:09:19.307 "data_size": 0 00:09:19.307 }, 00:09:19.307 { 00:09:19.307 "name": "BaseBdev3", 00:09:19.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.307 "is_configured": false, 00:09:19.307 "data_offset": 0, 00:09:19.307 "data_size": 0 00:09:19.307 }, 00:09:19.307 { 00:09:19.307 "name": "BaseBdev4", 00:09:19.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.307 "is_configured": false, 00:09:19.307 "data_offset": 0, 00:09:19.307 "data_size": 0 00:09:19.307 } 00:09:19.307 ] 00:09:19.307 }' 00:09:19.307 14:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.307 14:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.564 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:19.564 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.564 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.564 [2024-10-01 14:34:11.214016] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:09:19.564 BaseBdev2 00:09:19.564 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.564 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:19.564 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:19.564 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:19.564 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:19.564 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:19.564 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:19.564 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:19.564 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.564 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.564 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.564 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:19.564 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.564 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.564 [ 00:09:19.564 { 00:09:19.564 "name": "BaseBdev2", 00:09:19.564 "aliases": [ 00:09:19.564 "c2a1de77-dc3f-4be6-800d-55e53fc6ece1" 00:09:19.564 ], 00:09:19.564 "product_name": "Malloc disk", 00:09:19.564 "block_size": 512, 00:09:19.564 "num_blocks": 65536, 00:09:19.564 "uuid": "c2a1de77-dc3f-4be6-800d-55e53fc6ece1", 00:09:19.564 
"assigned_rate_limits": { 00:09:19.564 "rw_ios_per_sec": 0, 00:09:19.564 "rw_mbytes_per_sec": 0, 00:09:19.564 "r_mbytes_per_sec": 0, 00:09:19.564 "w_mbytes_per_sec": 0 00:09:19.564 }, 00:09:19.564 "claimed": true, 00:09:19.564 "claim_type": "exclusive_write", 00:09:19.564 "zoned": false, 00:09:19.564 "supported_io_types": { 00:09:19.564 "read": true, 00:09:19.564 "write": true, 00:09:19.564 "unmap": true, 00:09:19.564 "flush": true, 00:09:19.564 "reset": true, 00:09:19.564 "nvme_admin": false, 00:09:19.564 "nvme_io": false, 00:09:19.564 "nvme_io_md": false, 00:09:19.564 "write_zeroes": true, 00:09:19.564 "zcopy": true, 00:09:19.564 "get_zone_info": false, 00:09:19.564 "zone_management": false, 00:09:19.565 "zone_append": false, 00:09:19.565 "compare": false, 00:09:19.565 "compare_and_write": false, 00:09:19.565 "abort": true, 00:09:19.565 "seek_hole": false, 00:09:19.565 "seek_data": false, 00:09:19.565 "copy": true, 00:09:19.565 "nvme_iov_md": false 00:09:19.565 }, 00:09:19.565 "memory_domains": [ 00:09:19.565 { 00:09:19.565 "dma_device_id": "system", 00:09:19.565 "dma_device_type": 1 00:09:19.565 }, 00:09:19.565 { 00:09:19.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.565 "dma_device_type": 2 00:09:19.565 } 00:09:19.565 ], 00:09:19.565 "driver_specific": {} 00:09:19.565 } 00:09:19.565 ] 00:09:19.565 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.565 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:19.565 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:19.565 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:19.565 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:19.565 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:09:19.565 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.565 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:19.565 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:19.565 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:19.565 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.565 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.565 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.565 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.565 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.565 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.565 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.565 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.822 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.822 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.822 "name": "Existed_Raid", 00:09:19.822 "uuid": "160ac279-07e0-47cd-9054-5e3479044293", 00:09:19.822 "strip_size_kb": 0, 00:09:19.822 "state": "configuring", 00:09:19.822 "raid_level": "raid1", 00:09:19.822 "superblock": true, 00:09:19.822 "num_base_bdevs": 4, 00:09:19.822 "num_base_bdevs_discovered": 2, 00:09:19.822 "num_base_bdevs_operational": 4, 
00:09:19.822 "base_bdevs_list": [ 00:09:19.822 { 00:09:19.822 "name": "BaseBdev1", 00:09:19.822 "uuid": "c6246a62-0f0c-40af-a75c-2a226ba9ac6a", 00:09:19.822 "is_configured": true, 00:09:19.822 "data_offset": 2048, 00:09:19.822 "data_size": 63488 00:09:19.822 }, 00:09:19.822 { 00:09:19.822 "name": "BaseBdev2", 00:09:19.822 "uuid": "c2a1de77-dc3f-4be6-800d-55e53fc6ece1", 00:09:19.822 "is_configured": true, 00:09:19.822 "data_offset": 2048, 00:09:19.822 "data_size": 63488 00:09:19.822 }, 00:09:19.822 { 00:09:19.822 "name": "BaseBdev3", 00:09:19.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.822 "is_configured": false, 00:09:19.822 "data_offset": 0, 00:09:19.822 "data_size": 0 00:09:19.822 }, 00:09:19.822 { 00:09:19.822 "name": "BaseBdev4", 00:09:19.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.822 "is_configured": false, 00:09:19.822 "data_offset": 0, 00:09:19.822 "data_size": 0 00:09:19.822 } 00:09:19.822 ] 00:09:19.822 }' 00:09:19.822 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.822 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.146 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:20.146 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.146 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.146 [2024-10-01 14:34:11.599112] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:20.146 BaseBdev3 00:09:20.146 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.146 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:20.146 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # 
local bdev_name=BaseBdev3 00:09:20.146 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:20.146 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:20.146 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:20.146 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:20.146 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:20.146 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.146 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.146 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.146 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:20.146 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.146 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.146 [ 00:09:20.146 { 00:09:20.146 "name": "BaseBdev3", 00:09:20.146 "aliases": [ 00:09:20.146 "49bb5c3a-26b2-4cff-b36c-ec778bbd0d4c" 00:09:20.146 ], 00:09:20.146 "product_name": "Malloc disk", 00:09:20.146 "block_size": 512, 00:09:20.146 "num_blocks": 65536, 00:09:20.146 "uuid": "49bb5c3a-26b2-4cff-b36c-ec778bbd0d4c", 00:09:20.146 "assigned_rate_limits": { 00:09:20.146 "rw_ios_per_sec": 0, 00:09:20.146 "rw_mbytes_per_sec": 0, 00:09:20.146 "r_mbytes_per_sec": 0, 00:09:20.146 "w_mbytes_per_sec": 0 00:09:20.146 }, 00:09:20.146 "claimed": true, 00:09:20.146 "claim_type": "exclusive_write", 00:09:20.146 "zoned": false, 00:09:20.146 "supported_io_types": { 00:09:20.146 "read": true, 00:09:20.146 
"write": true, 00:09:20.146 "unmap": true, 00:09:20.146 "flush": true, 00:09:20.146 "reset": true, 00:09:20.146 "nvme_admin": false, 00:09:20.146 "nvme_io": false, 00:09:20.146 "nvme_io_md": false, 00:09:20.146 "write_zeroes": true, 00:09:20.146 "zcopy": true, 00:09:20.146 "get_zone_info": false, 00:09:20.146 "zone_management": false, 00:09:20.146 "zone_append": false, 00:09:20.146 "compare": false, 00:09:20.146 "compare_and_write": false, 00:09:20.146 "abort": true, 00:09:20.146 "seek_hole": false, 00:09:20.146 "seek_data": false, 00:09:20.146 "copy": true, 00:09:20.146 "nvme_iov_md": false 00:09:20.146 }, 00:09:20.146 "memory_domains": [ 00:09:20.146 { 00:09:20.146 "dma_device_id": "system", 00:09:20.146 "dma_device_type": 1 00:09:20.146 }, 00:09:20.146 { 00:09:20.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.146 "dma_device_type": 2 00:09:20.146 } 00:09:20.146 ], 00:09:20.146 "driver_specific": {} 00:09:20.146 } 00:09:20.146 ] 00:09:20.146 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.146 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:20.146 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:20.146 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:20.146 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:20.146 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.146 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.146 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:20.146 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:09:20.146 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:20.146 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.146 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.146 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.146 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.146 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.146 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.146 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.146 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.147 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.147 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.147 "name": "Existed_Raid", 00:09:20.147 "uuid": "160ac279-07e0-47cd-9054-5e3479044293", 00:09:20.147 "strip_size_kb": 0, 00:09:20.147 "state": "configuring", 00:09:20.147 "raid_level": "raid1", 00:09:20.147 "superblock": true, 00:09:20.147 "num_base_bdevs": 4, 00:09:20.147 "num_base_bdevs_discovered": 3, 00:09:20.147 "num_base_bdevs_operational": 4, 00:09:20.147 "base_bdevs_list": [ 00:09:20.147 { 00:09:20.147 "name": "BaseBdev1", 00:09:20.147 "uuid": "c6246a62-0f0c-40af-a75c-2a226ba9ac6a", 00:09:20.147 "is_configured": true, 00:09:20.147 "data_offset": 2048, 00:09:20.147 "data_size": 63488 00:09:20.147 }, 00:09:20.147 { 00:09:20.147 "name": "BaseBdev2", 00:09:20.147 "uuid": 
"c2a1de77-dc3f-4be6-800d-55e53fc6ece1", 00:09:20.147 "is_configured": true, 00:09:20.147 "data_offset": 2048, 00:09:20.147 "data_size": 63488 00:09:20.147 }, 00:09:20.147 { 00:09:20.147 "name": "BaseBdev3", 00:09:20.147 "uuid": "49bb5c3a-26b2-4cff-b36c-ec778bbd0d4c", 00:09:20.147 "is_configured": true, 00:09:20.147 "data_offset": 2048, 00:09:20.147 "data_size": 63488 00:09:20.147 }, 00:09:20.147 { 00:09:20.147 "name": "BaseBdev4", 00:09:20.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.147 "is_configured": false, 00:09:20.147 "data_offset": 0, 00:09:20.147 "data_size": 0 00:09:20.147 } 00:09:20.147 ] 00:09:20.147 }' 00:09:20.147 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.147 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.405 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:20.405 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.405 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.405 [2024-10-01 14:34:11.968513] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:20.405 [2024-10-01 14:34:11.968850] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:20.405 [2024-10-01 14:34:11.968868] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:20.405 BaseBdev4 00:09:20.405 [2024-10-01 14:34:11.969164] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:20.405 [2024-10-01 14:34:11.969327] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:20.405 [2024-10-01 14:34:11.969339] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:09:20.405 [2024-10-01 14:34:11.969492] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:20.405 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.405 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:20.405 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:20.405 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:20.405 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:20.405 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:20.405 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:20.405 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:20.405 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.405 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.405 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.405 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:20.405 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.405 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.405 [ 00:09:20.405 { 00:09:20.405 "name": "BaseBdev4", 00:09:20.405 "aliases": [ 00:09:20.405 "681828af-7764-4878-b0fd-1a2a93fdc1b0" 00:09:20.405 ], 00:09:20.405 "product_name": "Malloc disk", 00:09:20.405 "block_size": 512, 00:09:20.405 
"num_blocks": 65536, 00:09:20.405 "uuid": "681828af-7764-4878-b0fd-1a2a93fdc1b0", 00:09:20.405 "assigned_rate_limits": { 00:09:20.405 "rw_ios_per_sec": 0, 00:09:20.405 "rw_mbytes_per_sec": 0, 00:09:20.405 "r_mbytes_per_sec": 0, 00:09:20.405 "w_mbytes_per_sec": 0 00:09:20.405 }, 00:09:20.405 "claimed": true, 00:09:20.405 "claim_type": "exclusive_write", 00:09:20.405 "zoned": false, 00:09:20.405 "supported_io_types": { 00:09:20.405 "read": true, 00:09:20.405 "write": true, 00:09:20.405 "unmap": true, 00:09:20.405 "flush": true, 00:09:20.405 "reset": true, 00:09:20.405 "nvme_admin": false, 00:09:20.405 "nvme_io": false, 00:09:20.405 "nvme_io_md": false, 00:09:20.405 "write_zeroes": true, 00:09:20.405 "zcopy": true, 00:09:20.405 "get_zone_info": false, 00:09:20.405 "zone_management": false, 00:09:20.405 "zone_append": false, 00:09:20.405 "compare": false, 00:09:20.405 "compare_and_write": false, 00:09:20.405 "abort": true, 00:09:20.405 "seek_hole": false, 00:09:20.405 "seek_data": false, 00:09:20.405 "copy": true, 00:09:20.405 "nvme_iov_md": false 00:09:20.405 }, 00:09:20.405 "memory_domains": [ 00:09:20.405 { 00:09:20.405 "dma_device_id": "system", 00:09:20.405 "dma_device_type": 1 00:09:20.405 }, 00:09:20.405 { 00:09:20.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.405 "dma_device_type": 2 00:09:20.405 } 00:09:20.405 ], 00:09:20.405 "driver_specific": {} 00:09:20.405 } 00:09:20.405 ] 00:09:20.405 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.405 14:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:20.405 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:20.405 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:20.405 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:09:20.405 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.405 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:20.405 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:20.405 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:20.405 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:20.405 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.406 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.406 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.406 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.406 14:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.406 14:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.406 14:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.406 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.406 14:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.406 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.406 "name": "Existed_Raid", 00:09:20.406 "uuid": "160ac279-07e0-47cd-9054-5e3479044293", 00:09:20.406 "strip_size_kb": 0, 00:09:20.406 "state": "online", 00:09:20.406 "raid_level": "raid1", 00:09:20.406 "superblock": true, 00:09:20.406 "num_base_bdevs": 4, 
00:09:20.406 "num_base_bdevs_discovered": 4, 00:09:20.406 "num_base_bdevs_operational": 4, 00:09:20.406 "base_bdevs_list": [ 00:09:20.406 { 00:09:20.406 "name": "BaseBdev1", 00:09:20.406 "uuid": "c6246a62-0f0c-40af-a75c-2a226ba9ac6a", 00:09:20.406 "is_configured": true, 00:09:20.406 "data_offset": 2048, 00:09:20.406 "data_size": 63488 00:09:20.406 }, 00:09:20.406 { 00:09:20.406 "name": "BaseBdev2", 00:09:20.406 "uuid": "c2a1de77-dc3f-4be6-800d-55e53fc6ece1", 00:09:20.406 "is_configured": true, 00:09:20.406 "data_offset": 2048, 00:09:20.406 "data_size": 63488 00:09:20.406 }, 00:09:20.406 { 00:09:20.406 "name": "BaseBdev3", 00:09:20.406 "uuid": "49bb5c3a-26b2-4cff-b36c-ec778bbd0d4c", 00:09:20.406 "is_configured": true, 00:09:20.406 "data_offset": 2048, 00:09:20.406 "data_size": 63488 00:09:20.406 }, 00:09:20.406 { 00:09:20.406 "name": "BaseBdev4", 00:09:20.406 "uuid": "681828af-7764-4878-b0fd-1a2a93fdc1b0", 00:09:20.406 "is_configured": true, 00:09:20.406 "data_offset": 2048, 00:09:20.406 "data_size": 63488 00:09:20.406 } 00:09:20.406 ] 00:09:20.406 }' 00:09:20.406 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.406 14:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.972 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:20.972 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:20.972 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:20.972 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:20.972 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:20.972 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:20.972 
14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:20.972 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:20.972 14:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.972 14:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.972 [2024-10-01 14:34:12.361042] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:20.972 14:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.972 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:20.972 "name": "Existed_Raid", 00:09:20.972 "aliases": [ 00:09:20.972 "160ac279-07e0-47cd-9054-5e3479044293" 00:09:20.972 ], 00:09:20.972 "product_name": "Raid Volume", 00:09:20.972 "block_size": 512, 00:09:20.972 "num_blocks": 63488, 00:09:20.972 "uuid": "160ac279-07e0-47cd-9054-5e3479044293", 00:09:20.972 "assigned_rate_limits": { 00:09:20.972 "rw_ios_per_sec": 0, 00:09:20.972 "rw_mbytes_per_sec": 0, 00:09:20.972 "r_mbytes_per_sec": 0, 00:09:20.972 "w_mbytes_per_sec": 0 00:09:20.972 }, 00:09:20.972 "claimed": false, 00:09:20.972 "zoned": false, 00:09:20.972 "supported_io_types": { 00:09:20.972 "read": true, 00:09:20.972 "write": true, 00:09:20.972 "unmap": false, 00:09:20.972 "flush": false, 00:09:20.972 "reset": true, 00:09:20.972 "nvme_admin": false, 00:09:20.972 "nvme_io": false, 00:09:20.972 "nvme_io_md": false, 00:09:20.972 "write_zeroes": true, 00:09:20.972 "zcopy": false, 00:09:20.972 "get_zone_info": false, 00:09:20.972 "zone_management": false, 00:09:20.972 "zone_append": false, 00:09:20.972 "compare": false, 00:09:20.972 "compare_and_write": false, 00:09:20.972 "abort": false, 00:09:20.972 "seek_hole": false, 00:09:20.972 "seek_data": false, 00:09:20.972 "copy": false, 00:09:20.972 
"nvme_iov_md": false 00:09:20.972 }, 00:09:20.972 "memory_domains": [ 00:09:20.972 { 00:09:20.972 "dma_device_id": "system", 00:09:20.972 "dma_device_type": 1 00:09:20.972 }, 00:09:20.972 { 00:09:20.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.972 "dma_device_type": 2 00:09:20.972 }, 00:09:20.972 { 00:09:20.972 "dma_device_id": "system", 00:09:20.972 "dma_device_type": 1 00:09:20.972 }, 00:09:20.972 { 00:09:20.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.972 "dma_device_type": 2 00:09:20.972 }, 00:09:20.972 { 00:09:20.972 "dma_device_id": "system", 00:09:20.972 "dma_device_type": 1 00:09:20.972 }, 00:09:20.972 { 00:09:20.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.972 "dma_device_type": 2 00:09:20.972 }, 00:09:20.972 { 00:09:20.972 "dma_device_id": "system", 00:09:20.972 "dma_device_type": 1 00:09:20.972 }, 00:09:20.972 { 00:09:20.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.972 "dma_device_type": 2 00:09:20.972 } 00:09:20.972 ], 00:09:20.972 "driver_specific": { 00:09:20.972 "raid": { 00:09:20.972 "uuid": "160ac279-07e0-47cd-9054-5e3479044293", 00:09:20.972 "strip_size_kb": 0, 00:09:20.972 "state": "online", 00:09:20.972 "raid_level": "raid1", 00:09:20.972 "superblock": true, 00:09:20.972 "num_base_bdevs": 4, 00:09:20.972 "num_base_bdevs_discovered": 4, 00:09:20.972 "num_base_bdevs_operational": 4, 00:09:20.972 "base_bdevs_list": [ 00:09:20.972 { 00:09:20.972 "name": "BaseBdev1", 00:09:20.972 "uuid": "c6246a62-0f0c-40af-a75c-2a226ba9ac6a", 00:09:20.972 "is_configured": true, 00:09:20.972 "data_offset": 2048, 00:09:20.972 "data_size": 63488 00:09:20.972 }, 00:09:20.972 { 00:09:20.972 "name": "BaseBdev2", 00:09:20.972 "uuid": "c2a1de77-dc3f-4be6-800d-55e53fc6ece1", 00:09:20.972 "is_configured": true, 00:09:20.972 "data_offset": 2048, 00:09:20.972 "data_size": 63488 00:09:20.972 }, 00:09:20.972 { 00:09:20.972 "name": "BaseBdev3", 00:09:20.972 "uuid": "49bb5c3a-26b2-4cff-b36c-ec778bbd0d4c", 00:09:20.972 "is_configured": true, 
00:09:20.972 "data_offset": 2048, 00:09:20.972 "data_size": 63488 00:09:20.972 }, 00:09:20.972 { 00:09:20.972 "name": "BaseBdev4", 00:09:20.972 "uuid": "681828af-7764-4878-b0fd-1a2a93fdc1b0", 00:09:20.972 "is_configured": true, 00:09:20.972 "data_offset": 2048, 00:09:20.972 "data_size": 63488 00:09:20.972 } 00:09:20.972 ] 00:09:20.972 } 00:09:20.972 } 00:09:20.972 }' 00:09:20.972 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:20.972 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:20.972 BaseBdev2 00:09:20.972 BaseBdev3 00:09:20.972 BaseBdev4' 00:09:20.972 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.972 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:20.972 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.972 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.972 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.973 14:34:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.973 [2024-10-01 14:34:12.576810] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:20.973 14:34:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.973 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.231 14:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.231 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.231 "name": "Existed_Raid", 00:09:21.231 "uuid": "160ac279-07e0-47cd-9054-5e3479044293", 00:09:21.231 "strip_size_kb": 0, 00:09:21.231 
"state": "online", 00:09:21.231 "raid_level": "raid1", 00:09:21.231 "superblock": true, 00:09:21.231 "num_base_bdevs": 4, 00:09:21.231 "num_base_bdevs_discovered": 3, 00:09:21.231 "num_base_bdevs_operational": 3, 00:09:21.231 "base_bdevs_list": [ 00:09:21.231 { 00:09:21.231 "name": null, 00:09:21.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.231 "is_configured": false, 00:09:21.231 "data_offset": 0, 00:09:21.231 "data_size": 63488 00:09:21.231 }, 00:09:21.231 { 00:09:21.231 "name": "BaseBdev2", 00:09:21.231 "uuid": "c2a1de77-dc3f-4be6-800d-55e53fc6ece1", 00:09:21.231 "is_configured": true, 00:09:21.231 "data_offset": 2048, 00:09:21.231 "data_size": 63488 00:09:21.231 }, 00:09:21.231 { 00:09:21.231 "name": "BaseBdev3", 00:09:21.231 "uuid": "49bb5c3a-26b2-4cff-b36c-ec778bbd0d4c", 00:09:21.231 "is_configured": true, 00:09:21.231 "data_offset": 2048, 00:09:21.231 "data_size": 63488 00:09:21.231 }, 00:09:21.231 { 00:09:21.231 "name": "BaseBdev4", 00:09:21.231 "uuid": "681828af-7764-4878-b0fd-1a2a93fdc1b0", 00:09:21.231 "is_configured": true, 00:09:21.231 "data_offset": 2048, 00:09:21.231 "data_size": 63488 00:09:21.231 } 00:09:21.231 ] 00:09:21.231 }' 00:09:21.231 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.231 14:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.490 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:21.490 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:21.490 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.490 14:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.490 14:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.490 14:34:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:21.490 14:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.490 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:21.490 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:21.490 14:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:21.490 14:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.490 14:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.490 [2024-10-01 14:34:12.992251] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:21.490 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.490 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:21.490 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:21.490 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.490 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.490 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:21.490 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.490 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.490 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:21.490 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid 
'!=' Existed_Raid ']' 00:09:21.490 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:21.490 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.490 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.490 [2024-10-01 14:34:13.107307] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:21.490 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.490 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:21.490 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.749 [2024-10-01 14:34:13.205986] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:21.749 [2024-10-01 14:34:13.206274] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:21.749 [2024-10-01 14:34:13.270030] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:21.749 [2024-10-01 14:34:13.270101] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:21.749 [2024-10-01 14:34:13.270113] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.749 BaseBdev2 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:09:21.749 [ 00:09:21.749 { 00:09:21.749 "name": "BaseBdev2", 00:09:21.749 "aliases": [ 00:09:21.749 "f1cb3af6-070b-4e9f-979b-304e21894f1b" 00:09:21.749 ], 00:09:21.749 "product_name": "Malloc disk", 00:09:21.749 "block_size": 512, 00:09:21.749 "num_blocks": 65536, 00:09:21.749 "uuid": "f1cb3af6-070b-4e9f-979b-304e21894f1b", 00:09:21.749 "assigned_rate_limits": { 00:09:21.749 "rw_ios_per_sec": 0, 00:09:21.749 "rw_mbytes_per_sec": 0, 00:09:21.749 "r_mbytes_per_sec": 0, 00:09:21.749 "w_mbytes_per_sec": 0 00:09:21.749 }, 00:09:21.749 "claimed": false, 00:09:21.749 "zoned": false, 00:09:21.749 "supported_io_types": { 00:09:21.749 "read": true, 00:09:21.749 "write": true, 00:09:21.749 "unmap": true, 00:09:21.749 "flush": true, 00:09:21.749 "reset": true, 00:09:21.749 "nvme_admin": false, 00:09:21.749 "nvme_io": false, 00:09:21.749 "nvme_io_md": false, 00:09:21.749 "write_zeroes": true, 00:09:21.749 "zcopy": true, 00:09:21.749 "get_zone_info": false, 00:09:21.749 "zone_management": false, 00:09:21.749 "zone_append": false, 00:09:21.749 "compare": false, 00:09:21.749 "compare_and_write": false, 00:09:21.749 "abort": true, 00:09:21.749 "seek_hole": false, 00:09:21.749 "seek_data": false, 00:09:21.749 "copy": true, 00:09:21.749 "nvme_iov_md": false 00:09:21.749 }, 00:09:21.749 "memory_domains": [ 00:09:21.749 { 00:09:21.749 "dma_device_id": "system", 00:09:21.749 "dma_device_type": 1 00:09:21.749 }, 00:09:21.749 { 00:09:21.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.749 "dma_device_type": 2 00:09:21.749 } 00:09:21.749 ], 00:09:21.749 "driver_specific": {} 00:09:21.749 } 00:09:21.749 ] 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:21.749 14:34:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.749 BaseBdev3 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:21.749 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:21.750 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:21.750 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.750 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.750 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.750 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:21.750 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.750 14:34:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.750 [ 00:09:21.750 { 00:09:21.750 "name": "BaseBdev3", 00:09:21.750 "aliases": [ 00:09:21.750 "8fd76b32-e160-46f1-9c53-7a264a45f991" 00:09:21.750 ], 00:09:21.750 "product_name": "Malloc disk", 00:09:21.750 "block_size": 512, 00:09:21.750 "num_blocks": 65536, 00:09:21.750 "uuid": "8fd76b32-e160-46f1-9c53-7a264a45f991", 00:09:21.750 "assigned_rate_limits": { 00:09:21.750 "rw_ios_per_sec": 0, 00:09:21.750 "rw_mbytes_per_sec": 0, 00:09:21.750 "r_mbytes_per_sec": 0, 00:09:21.750 "w_mbytes_per_sec": 0 00:09:21.750 }, 00:09:21.750 "claimed": false, 00:09:21.750 "zoned": false, 00:09:21.750 "supported_io_types": { 00:09:21.750 "read": true, 00:09:21.750 "write": true, 00:09:21.750 "unmap": true, 00:09:21.750 "flush": true, 00:09:21.750 "reset": true, 00:09:21.750 "nvme_admin": false, 00:09:21.750 "nvme_io": false, 00:09:21.750 "nvme_io_md": false, 00:09:21.750 "write_zeroes": true, 00:09:21.750 "zcopy": true, 00:09:21.750 "get_zone_info": false, 00:09:21.750 "zone_management": false, 00:09:21.750 "zone_append": false, 00:09:21.750 "compare": false, 00:09:21.750 "compare_and_write": false, 00:09:21.750 "abort": true, 00:09:21.750 "seek_hole": false, 00:09:21.750 "seek_data": false, 00:09:21.750 "copy": true, 00:09:21.750 "nvme_iov_md": false 00:09:21.750 }, 00:09:21.750 "memory_domains": [ 00:09:21.750 { 00:09:21.750 "dma_device_id": "system", 00:09:21.750 "dma_device_type": 1 00:09:21.750 }, 00:09:21.750 { 00:09:21.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.750 "dma_device_type": 2 00:09:21.750 } 00:09:21.750 ], 00:09:21.750 "driver_specific": {} 00:09:21.750 } 00:09:21.750 ] 00:09:21.750 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.750 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:21.750 14:34:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:21.750 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:21.750 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:21.750 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.750 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.010 BaseBdev4 00:09:22.010 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.010 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:22.010 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:22.010 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:22.010 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:22.010 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:22.010 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:22.010 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:22.010 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.010 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.010 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.010 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:22.010 14:34:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.010 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.010 [ 00:09:22.010 { 00:09:22.010 "name": "BaseBdev4", 00:09:22.010 "aliases": [ 00:09:22.010 "b323f651-abd3-45df-8844-92307ecc8fb4" 00:09:22.010 ], 00:09:22.010 "product_name": "Malloc disk", 00:09:22.010 "block_size": 512, 00:09:22.010 "num_blocks": 65536, 00:09:22.010 "uuid": "b323f651-abd3-45df-8844-92307ecc8fb4", 00:09:22.010 "assigned_rate_limits": { 00:09:22.010 "rw_ios_per_sec": 0, 00:09:22.010 "rw_mbytes_per_sec": 0, 00:09:22.010 "r_mbytes_per_sec": 0, 00:09:22.010 "w_mbytes_per_sec": 0 00:09:22.010 }, 00:09:22.010 "claimed": false, 00:09:22.010 "zoned": false, 00:09:22.010 "supported_io_types": { 00:09:22.010 "read": true, 00:09:22.010 "write": true, 00:09:22.010 "unmap": true, 00:09:22.010 "flush": true, 00:09:22.010 "reset": true, 00:09:22.010 "nvme_admin": false, 00:09:22.010 "nvme_io": false, 00:09:22.010 "nvme_io_md": false, 00:09:22.010 "write_zeroes": true, 00:09:22.010 "zcopy": true, 00:09:22.010 "get_zone_info": false, 00:09:22.010 "zone_management": false, 00:09:22.010 "zone_append": false, 00:09:22.010 "compare": false, 00:09:22.010 "compare_and_write": false, 00:09:22.010 "abort": true, 00:09:22.010 "seek_hole": false, 00:09:22.010 "seek_data": false, 00:09:22.010 "copy": true, 00:09:22.010 "nvme_iov_md": false 00:09:22.010 }, 00:09:22.010 "memory_domains": [ 00:09:22.010 { 00:09:22.010 "dma_device_id": "system", 00:09:22.010 "dma_device_type": 1 00:09:22.010 }, 00:09:22.010 { 00:09:22.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.010 "dma_device_type": 2 00:09:22.010 } 00:09:22.010 ], 00:09:22.010 "driver_specific": {} 00:09:22.010 } 00:09:22.010 ] 00:09:22.010 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.010 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
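The `(( i = 1 )) … (( i < num_base_bdevs ))` loop above recreates BaseBdev2 through BaseBdev4 after the teardown. A self-contained sketch of that loop (assumption: `rpc_cmd` is stubbed with `echo` here so the sketch runs without a live SPDK target; the real helper sends JSON-RPC to the app):

```shell
# Sketch of the base-bdev (re)creation loop seen in the log.
num_base_bdevs=4
rpc_cmd() { echo "rpc: $*"; }   # stand-in for SPDK's rpc_cmd wrapper

created=()
for ((i = 1; i < num_base_bdevs; i++)); do
    # The real test allocates a 32 MiB malloc bdev with 512-byte blocks,
    # matching the `bdev_malloc_create 32 512 -b BaseBdevN` calls above.
    rpc_cmd bdev_malloc_create 32 512 -b "BaseBdev$((i + 1))"
    created+=("BaseBdev$((i + 1))")
done
echo "${created[*]}"
```

Each created bdev is then handed to `waitforbdev`, which blocks until the bdev layer has examined and registered it.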
00:09:22.010 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:22.010 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:22.010 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:22.010 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.010 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.010 [2024-10-01 14:34:13.474862] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:22.010 [2024-10-01 14:34:13.475103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:22.010 [2024-10-01 14:34:13.475180] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:22.010 [2024-10-01 14:34:13.477176] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:22.010 [2024-10-01 14:34:13.477326] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:22.010 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.010 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:22.010 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.010 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.010 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:22.010 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:09:22.010 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:22.010 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.010 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.010 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.010 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.010 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.010 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.010 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.010 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.010 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.010 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.010 "name": "Existed_Raid", 00:09:22.010 "uuid": "3fa72bc6-3c0b-4fc2-8e8e-9ca9a1a96746", 00:09:22.010 "strip_size_kb": 0, 00:09:22.010 "state": "configuring", 00:09:22.010 "raid_level": "raid1", 00:09:22.010 "superblock": true, 00:09:22.010 "num_base_bdevs": 4, 00:09:22.010 "num_base_bdevs_discovered": 3, 00:09:22.010 "num_base_bdevs_operational": 4, 00:09:22.010 "base_bdevs_list": [ 00:09:22.010 { 00:09:22.010 "name": "BaseBdev1", 00:09:22.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.010 "is_configured": false, 00:09:22.010 "data_offset": 0, 00:09:22.010 "data_size": 0 00:09:22.010 }, 00:09:22.010 { 00:09:22.010 "name": "BaseBdev2", 00:09:22.010 "uuid": "f1cb3af6-070b-4e9f-979b-304e21894f1b", 
00:09:22.010 "is_configured": true, 00:09:22.010 "data_offset": 2048, 00:09:22.010 "data_size": 63488 00:09:22.010 }, 00:09:22.010 { 00:09:22.010 "name": "BaseBdev3", 00:09:22.010 "uuid": "8fd76b32-e160-46f1-9c53-7a264a45f991", 00:09:22.010 "is_configured": true, 00:09:22.010 "data_offset": 2048, 00:09:22.010 "data_size": 63488 00:09:22.010 }, 00:09:22.010 { 00:09:22.010 "name": "BaseBdev4", 00:09:22.010 "uuid": "b323f651-abd3-45df-8844-92307ecc8fb4", 00:09:22.010 "is_configured": true, 00:09:22.010 "data_offset": 2048, 00:09:22.010 "data_size": 63488 00:09:22.010 } 00:09:22.010 ] 00:09:22.010 }' 00:09:22.010 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.010 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.269 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:22.269 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.269 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.269 [2024-10-01 14:34:13.810915] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:22.269 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.269 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:22.269 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.269 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.269 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:22.269 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:09:22.269 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:22.269 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.269 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.269 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.269 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.269 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.269 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.269 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.269 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.269 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.269 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.269 "name": "Existed_Raid", 00:09:22.269 "uuid": "3fa72bc6-3c0b-4fc2-8e8e-9ca9a1a96746", 00:09:22.269 "strip_size_kb": 0, 00:09:22.269 "state": "configuring", 00:09:22.269 "raid_level": "raid1", 00:09:22.269 "superblock": true, 00:09:22.269 "num_base_bdevs": 4, 00:09:22.269 "num_base_bdevs_discovered": 2, 00:09:22.269 "num_base_bdevs_operational": 4, 00:09:22.269 "base_bdevs_list": [ 00:09:22.269 { 00:09:22.269 "name": "BaseBdev1", 00:09:22.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.269 "is_configured": false, 00:09:22.269 "data_offset": 0, 00:09:22.269 "data_size": 0 00:09:22.269 }, 00:09:22.269 { 00:09:22.269 "name": null, 00:09:22.269 "uuid": "f1cb3af6-070b-4e9f-979b-304e21894f1b", 00:09:22.269 
"is_configured": false, 00:09:22.269 "data_offset": 0, 00:09:22.269 "data_size": 63488 00:09:22.269 }, 00:09:22.269 { 00:09:22.269 "name": "BaseBdev3", 00:09:22.269 "uuid": "8fd76b32-e160-46f1-9c53-7a264a45f991", 00:09:22.269 "is_configured": true, 00:09:22.269 "data_offset": 2048, 00:09:22.269 "data_size": 63488 00:09:22.269 }, 00:09:22.269 { 00:09:22.269 "name": "BaseBdev4", 00:09:22.269 "uuid": "b323f651-abd3-45df-8844-92307ecc8fb4", 00:09:22.269 "is_configured": true, 00:09:22.269 "data_offset": 2048, 00:09:22.269 "data_size": 63488 00:09:22.269 } 00:09:22.269 ] 00:09:22.269 }' 00:09:22.269 14:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.269 14:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.527 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.527 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:22.527 14:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.527 14:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.527 14:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.527 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:22.527 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:22.527 14:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.527 14:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.527 [2024-10-01 14:34:14.176037] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:22.527 BaseBdev1 
00:09:22.527 14:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.527 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:22.528 14:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:22.528 14:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:22.528 14:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:22.528 14:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:22.528 14:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:22.528 14:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:22.528 14:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.528 14:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.528 14:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.528 14:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:22.528 14:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.528 14:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.528 [ 00:09:22.528 { 00:09:22.528 "name": "BaseBdev1", 00:09:22.528 "aliases": [ 00:09:22.528 "a80294d8-2550-49f8-9d42-ccca8a8ee93e" 00:09:22.528 ], 00:09:22.528 "product_name": "Malloc disk", 00:09:22.528 "block_size": 512, 00:09:22.528 "num_blocks": 65536, 00:09:22.528 "uuid": "a80294d8-2550-49f8-9d42-ccca8a8ee93e", 00:09:22.528 "assigned_rate_limits": { 00:09:22.528 
"rw_ios_per_sec": 0, 00:09:22.528 "rw_mbytes_per_sec": 0, 00:09:22.528 "r_mbytes_per_sec": 0, 00:09:22.528 "w_mbytes_per_sec": 0 00:09:22.528 }, 00:09:22.528 "claimed": true, 00:09:22.528 "claim_type": "exclusive_write", 00:09:22.528 "zoned": false, 00:09:22.528 "supported_io_types": { 00:09:22.528 "read": true, 00:09:22.528 "write": true, 00:09:22.528 "unmap": true, 00:09:22.528 "flush": true, 00:09:22.528 "reset": true, 00:09:22.528 "nvme_admin": false, 00:09:22.528 "nvme_io": false, 00:09:22.528 "nvme_io_md": false, 00:09:22.528 "write_zeroes": true, 00:09:22.528 "zcopy": true, 00:09:22.528 "get_zone_info": false, 00:09:22.528 "zone_management": false, 00:09:22.528 "zone_append": false, 00:09:22.528 "compare": false, 00:09:22.528 "compare_and_write": false, 00:09:22.528 "abort": true, 00:09:22.528 "seek_hole": false, 00:09:22.528 "seek_data": false, 00:09:22.528 "copy": true, 00:09:22.528 "nvme_iov_md": false 00:09:22.528 }, 00:09:22.528 "memory_domains": [ 00:09:22.528 { 00:09:22.528 "dma_device_id": "system", 00:09:22.528 "dma_device_type": 1 00:09:22.528 }, 00:09:22.528 { 00:09:22.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.528 "dma_device_type": 2 00:09:22.528 } 00:09:22.528 ], 00:09:22.528 "driver_specific": {} 00:09:22.528 } 00:09:22.528 ] 00:09:22.528 14:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.528 14:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:22.528 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:22.528 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.528 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.528 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:09:22.528 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:22.528 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:22.528 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.528 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.528 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.528 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.528 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.528 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.528 14:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.528 14:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.786 14:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.786 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.786 "name": "Existed_Raid", 00:09:22.786 "uuid": "3fa72bc6-3c0b-4fc2-8e8e-9ca9a1a96746", 00:09:22.786 "strip_size_kb": 0, 00:09:22.786 "state": "configuring", 00:09:22.786 "raid_level": "raid1", 00:09:22.786 "superblock": true, 00:09:22.786 "num_base_bdevs": 4, 00:09:22.786 "num_base_bdevs_discovered": 3, 00:09:22.786 "num_base_bdevs_operational": 4, 00:09:22.786 "base_bdevs_list": [ 00:09:22.786 { 00:09:22.786 "name": "BaseBdev1", 00:09:22.786 "uuid": "a80294d8-2550-49f8-9d42-ccca8a8ee93e", 00:09:22.786 "is_configured": true, 00:09:22.786 "data_offset": 2048, 00:09:22.786 "data_size": 63488 
00:09:22.786 }, 00:09:22.786 { 00:09:22.786 "name": null, 00:09:22.786 "uuid": "f1cb3af6-070b-4e9f-979b-304e21894f1b", 00:09:22.786 "is_configured": false, 00:09:22.786 "data_offset": 0, 00:09:22.786 "data_size": 63488 00:09:22.786 }, 00:09:22.786 { 00:09:22.786 "name": "BaseBdev3", 00:09:22.786 "uuid": "8fd76b32-e160-46f1-9c53-7a264a45f991", 00:09:22.786 "is_configured": true, 00:09:22.786 "data_offset": 2048, 00:09:22.786 "data_size": 63488 00:09:22.786 }, 00:09:22.786 { 00:09:22.786 "name": "BaseBdev4", 00:09:22.786 "uuid": "b323f651-abd3-45df-8844-92307ecc8fb4", 00:09:22.786 "is_configured": true, 00:09:22.786 "data_offset": 2048, 00:09:22.786 "data_size": 63488 00:09:22.786 } 00:09:22.786 ] 00:09:22.786 }' 00:09:22.786 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.786 14:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.043 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.043 14:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.043 14:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.043 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:23.043 14:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.043 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:23.043 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:23.043 14:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.043 14:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.044 
[2024-10-01 14:34:14.544212] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:23.044 14:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.044 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:23.044 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.044 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.044 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:23.044 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:23.044 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:23.044 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.044 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.044 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.044 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.044 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.044 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.044 14:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.044 14:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.044 14:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.044 14:34:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.044 "name": "Existed_Raid", 00:09:23.044 "uuid": "3fa72bc6-3c0b-4fc2-8e8e-9ca9a1a96746", 00:09:23.044 "strip_size_kb": 0, 00:09:23.044 "state": "configuring", 00:09:23.044 "raid_level": "raid1", 00:09:23.044 "superblock": true, 00:09:23.044 "num_base_bdevs": 4, 00:09:23.044 "num_base_bdevs_discovered": 2, 00:09:23.044 "num_base_bdevs_operational": 4, 00:09:23.044 "base_bdevs_list": [ 00:09:23.044 { 00:09:23.044 "name": "BaseBdev1", 00:09:23.044 "uuid": "a80294d8-2550-49f8-9d42-ccca8a8ee93e", 00:09:23.044 "is_configured": true, 00:09:23.044 "data_offset": 2048, 00:09:23.044 "data_size": 63488 00:09:23.044 }, 00:09:23.044 { 00:09:23.044 "name": null, 00:09:23.044 "uuid": "f1cb3af6-070b-4e9f-979b-304e21894f1b", 00:09:23.044 "is_configured": false, 00:09:23.044 "data_offset": 0, 00:09:23.044 "data_size": 63488 00:09:23.044 }, 00:09:23.044 { 00:09:23.044 "name": null, 00:09:23.044 "uuid": "8fd76b32-e160-46f1-9c53-7a264a45f991", 00:09:23.044 "is_configured": false, 00:09:23.044 "data_offset": 0, 00:09:23.044 "data_size": 63488 00:09:23.044 }, 00:09:23.044 { 00:09:23.044 "name": "BaseBdev4", 00:09:23.044 "uuid": "b323f651-abd3-45df-8844-92307ecc8fb4", 00:09:23.044 "is_configured": true, 00:09:23.044 "data_offset": 2048, 00:09:23.044 "data_size": 63488 00:09:23.044 } 00:09:23.044 ] 00:09:23.044 }' 00:09:23.044 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.044 14:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.301 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.301 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:23.301 14:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.301 
14:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.301 14:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.301 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:23.301 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:23.301 14:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.301 14:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.301 [2024-10-01 14:34:14.924293] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:23.301 14:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.301 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:23.301 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.301 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.301 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:23.301 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:23.301 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:23.301 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.301 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.301 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:23.301 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.301 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.301 14:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.301 14:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.301 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.301 14:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.301 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.302 "name": "Existed_Raid", 00:09:23.302 "uuid": "3fa72bc6-3c0b-4fc2-8e8e-9ca9a1a96746", 00:09:23.302 "strip_size_kb": 0, 00:09:23.302 "state": "configuring", 00:09:23.302 "raid_level": "raid1", 00:09:23.302 "superblock": true, 00:09:23.302 "num_base_bdevs": 4, 00:09:23.302 "num_base_bdevs_discovered": 3, 00:09:23.302 "num_base_bdevs_operational": 4, 00:09:23.302 "base_bdevs_list": [ 00:09:23.302 { 00:09:23.302 "name": "BaseBdev1", 00:09:23.302 "uuid": "a80294d8-2550-49f8-9d42-ccca8a8ee93e", 00:09:23.302 "is_configured": true, 00:09:23.302 "data_offset": 2048, 00:09:23.302 "data_size": 63488 00:09:23.302 }, 00:09:23.302 { 00:09:23.302 "name": null, 00:09:23.302 "uuid": "f1cb3af6-070b-4e9f-979b-304e21894f1b", 00:09:23.302 "is_configured": false, 00:09:23.302 "data_offset": 0, 00:09:23.302 "data_size": 63488 00:09:23.302 }, 00:09:23.302 { 00:09:23.302 "name": "BaseBdev3", 00:09:23.302 "uuid": "8fd76b32-e160-46f1-9c53-7a264a45f991", 00:09:23.302 "is_configured": true, 00:09:23.302 "data_offset": 2048, 00:09:23.302 "data_size": 63488 00:09:23.302 }, 00:09:23.302 { 00:09:23.302 "name": "BaseBdev4", 00:09:23.302 "uuid": 
"b323f651-abd3-45df-8844-92307ecc8fb4", 00:09:23.302 "is_configured": true, 00:09:23.302 "data_offset": 2048, 00:09:23.302 "data_size": 63488 00:09:23.302 } 00:09:23.302 ] 00:09:23.302 }' 00:09:23.302 14:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.302 14:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.866 14:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.866 14:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:23.866 14:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.866 14:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.866 14:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.866 14:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:23.866 14:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:23.866 14:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.866 14:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.866 [2024-10-01 14:34:15.284386] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:23.866 14:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.866 14:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:23.866 14:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.866 14:34:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.866 14:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:23.866 14:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:23.866 14:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:23.866 14:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.866 14:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.866 14:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.866 14:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.866 14:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.866 14:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.866 14:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.866 14:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.866 14:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.866 14:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.866 "name": "Existed_Raid", 00:09:23.866 "uuid": "3fa72bc6-3c0b-4fc2-8e8e-9ca9a1a96746", 00:09:23.866 "strip_size_kb": 0, 00:09:23.866 "state": "configuring", 00:09:23.866 "raid_level": "raid1", 00:09:23.866 "superblock": true, 00:09:23.866 "num_base_bdevs": 4, 00:09:23.866 "num_base_bdevs_discovered": 2, 00:09:23.866 "num_base_bdevs_operational": 4, 00:09:23.866 "base_bdevs_list": [ 00:09:23.866 { 00:09:23.866 "name": null, 00:09:23.866 
"uuid": "a80294d8-2550-49f8-9d42-ccca8a8ee93e", 00:09:23.866 "is_configured": false, 00:09:23.866 "data_offset": 0, 00:09:23.866 "data_size": 63488 00:09:23.866 }, 00:09:23.866 { 00:09:23.866 "name": null, 00:09:23.866 "uuid": "f1cb3af6-070b-4e9f-979b-304e21894f1b", 00:09:23.866 "is_configured": false, 00:09:23.866 "data_offset": 0, 00:09:23.866 "data_size": 63488 00:09:23.866 }, 00:09:23.866 { 00:09:23.866 "name": "BaseBdev3", 00:09:23.866 "uuid": "8fd76b32-e160-46f1-9c53-7a264a45f991", 00:09:23.866 "is_configured": true, 00:09:23.866 "data_offset": 2048, 00:09:23.866 "data_size": 63488 00:09:23.866 }, 00:09:23.866 { 00:09:23.866 "name": "BaseBdev4", 00:09:23.866 "uuid": "b323f651-abd3-45df-8844-92307ecc8fb4", 00:09:23.866 "is_configured": true, 00:09:23.866 "data_offset": 2048, 00:09:23.866 "data_size": 63488 00:09:23.866 } 00:09:23.866 ] 00:09:23.866 }' 00:09:23.866 14:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.866 14:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.123 14:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.123 14:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:24.123 14:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.123 14:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.123 14:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.123 14:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:24.124 14:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:24.124 14:34:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.124 14:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.124 [2024-10-01 14:34:15.698894] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:24.124 14:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.124 14:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:24.124 14:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.124 14:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.124 14:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:24.124 14:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:24.124 14:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:24.124 14:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.124 14:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.124 14:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.124 14:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.124 14:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.124 14:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.124 14:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.124 14:34:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.124 14:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.124 14:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.124 "name": "Existed_Raid", 00:09:24.124 "uuid": "3fa72bc6-3c0b-4fc2-8e8e-9ca9a1a96746", 00:09:24.124 "strip_size_kb": 0, 00:09:24.124 "state": "configuring", 00:09:24.124 "raid_level": "raid1", 00:09:24.124 "superblock": true, 00:09:24.124 "num_base_bdevs": 4, 00:09:24.124 "num_base_bdevs_discovered": 3, 00:09:24.124 "num_base_bdevs_operational": 4, 00:09:24.124 "base_bdevs_list": [ 00:09:24.124 { 00:09:24.124 "name": null, 00:09:24.124 "uuid": "a80294d8-2550-49f8-9d42-ccca8a8ee93e", 00:09:24.124 "is_configured": false, 00:09:24.124 "data_offset": 0, 00:09:24.124 "data_size": 63488 00:09:24.124 }, 00:09:24.124 { 00:09:24.124 "name": "BaseBdev2", 00:09:24.124 "uuid": "f1cb3af6-070b-4e9f-979b-304e21894f1b", 00:09:24.124 "is_configured": true, 00:09:24.124 "data_offset": 2048, 00:09:24.124 "data_size": 63488 00:09:24.124 }, 00:09:24.124 { 00:09:24.124 "name": "BaseBdev3", 00:09:24.124 "uuid": "8fd76b32-e160-46f1-9c53-7a264a45f991", 00:09:24.124 "is_configured": true, 00:09:24.124 "data_offset": 2048, 00:09:24.124 "data_size": 63488 00:09:24.124 }, 00:09:24.124 { 00:09:24.124 "name": "BaseBdev4", 00:09:24.124 "uuid": "b323f651-abd3-45df-8844-92307ecc8fb4", 00:09:24.124 "is_configured": true, 00:09:24.124 "data_offset": 2048, 00:09:24.124 "data_size": 63488 00:09:24.124 } 00:09:24.124 ] 00:09:24.124 }' 00:09:24.124 14:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.124 14:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.381 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.381 14:34:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.381 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.381 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:24.381 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.381 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:24.381 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.381 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.381 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.381 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:24.639 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.639 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a80294d8-2550-49f8-9d42-ccca8a8ee93e 00:09:24.639 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.639 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.639 [2024-10-01 14:34:16.119905] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:24.639 [2024-10-01 14:34:16.120142] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:24.639 [2024-10-01 14:34:16.120156] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:24.639 NewBaseBdev 00:09:24.639 [2024-10-01 14:34:16.120392] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:24.639 [2024-10-01 14:34:16.120512] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:24.639 [2024-10-01 14:34:16.120525] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:24.639 [2024-10-01 14:34:16.120633] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:24.639 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.639 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:24.639 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:24.639 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:24.639 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:24.639 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:24.639 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:24.639 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:24.639 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.639 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.639 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.639 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:24.639 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.639 
14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.639 [ 00:09:24.639 { 00:09:24.639 "name": "NewBaseBdev", 00:09:24.639 "aliases": [ 00:09:24.639 "a80294d8-2550-49f8-9d42-ccca8a8ee93e" 00:09:24.639 ], 00:09:24.639 "product_name": "Malloc disk", 00:09:24.639 "block_size": 512, 00:09:24.639 "num_blocks": 65536, 00:09:24.639 "uuid": "a80294d8-2550-49f8-9d42-ccca8a8ee93e", 00:09:24.639 "assigned_rate_limits": { 00:09:24.639 "rw_ios_per_sec": 0, 00:09:24.639 "rw_mbytes_per_sec": 0, 00:09:24.639 "r_mbytes_per_sec": 0, 00:09:24.639 "w_mbytes_per_sec": 0 00:09:24.639 }, 00:09:24.639 "claimed": true, 00:09:24.639 "claim_type": "exclusive_write", 00:09:24.639 "zoned": false, 00:09:24.639 "supported_io_types": { 00:09:24.639 "read": true, 00:09:24.639 "write": true, 00:09:24.639 "unmap": true, 00:09:24.639 "flush": true, 00:09:24.639 "reset": true, 00:09:24.639 "nvme_admin": false, 00:09:24.639 "nvme_io": false, 00:09:24.639 "nvme_io_md": false, 00:09:24.639 "write_zeroes": true, 00:09:24.639 "zcopy": true, 00:09:24.639 "get_zone_info": false, 00:09:24.639 "zone_management": false, 00:09:24.639 "zone_append": false, 00:09:24.639 "compare": false, 00:09:24.639 "compare_and_write": false, 00:09:24.639 "abort": true, 00:09:24.639 "seek_hole": false, 00:09:24.639 "seek_data": false, 00:09:24.639 "copy": true, 00:09:24.639 "nvme_iov_md": false 00:09:24.639 }, 00:09:24.639 "memory_domains": [ 00:09:24.639 { 00:09:24.639 "dma_device_id": "system", 00:09:24.639 "dma_device_type": 1 00:09:24.639 }, 00:09:24.639 { 00:09:24.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.639 "dma_device_type": 2 00:09:24.639 } 00:09:24.639 ], 00:09:24.639 "driver_specific": {} 00:09:24.639 } 00:09:24.639 ] 00:09:24.639 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.639 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:24.639 14:34:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:09:24.639 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.639 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:24.639 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:24.639 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:24.639 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:24.639 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.639 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.639 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.639 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.639 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.639 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.639 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.639 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.639 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.639 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.639 "name": "Existed_Raid", 00:09:24.639 "uuid": "3fa72bc6-3c0b-4fc2-8e8e-9ca9a1a96746", 00:09:24.639 "strip_size_kb": 0, 00:09:24.639 
"state": "online", 00:09:24.639 "raid_level": "raid1", 00:09:24.639 "superblock": true, 00:09:24.639 "num_base_bdevs": 4, 00:09:24.639 "num_base_bdevs_discovered": 4, 00:09:24.639 "num_base_bdevs_operational": 4, 00:09:24.639 "base_bdevs_list": [ 00:09:24.639 { 00:09:24.639 "name": "NewBaseBdev", 00:09:24.639 "uuid": "a80294d8-2550-49f8-9d42-ccca8a8ee93e", 00:09:24.639 "is_configured": true, 00:09:24.639 "data_offset": 2048, 00:09:24.639 "data_size": 63488 00:09:24.639 }, 00:09:24.639 { 00:09:24.639 "name": "BaseBdev2", 00:09:24.639 "uuid": "f1cb3af6-070b-4e9f-979b-304e21894f1b", 00:09:24.639 "is_configured": true, 00:09:24.639 "data_offset": 2048, 00:09:24.639 "data_size": 63488 00:09:24.639 }, 00:09:24.639 { 00:09:24.639 "name": "BaseBdev3", 00:09:24.640 "uuid": "8fd76b32-e160-46f1-9c53-7a264a45f991", 00:09:24.640 "is_configured": true, 00:09:24.640 "data_offset": 2048, 00:09:24.640 "data_size": 63488 00:09:24.640 }, 00:09:24.640 { 00:09:24.640 "name": "BaseBdev4", 00:09:24.640 "uuid": "b323f651-abd3-45df-8844-92307ecc8fb4", 00:09:24.640 "is_configured": true, 00:09:24.640 "data_offset": 2048, 00:09:24.640 "data_size": 63488 00:09:24.640 } 00:09:24.640 ] 00:09:24.640 }' 00:09:24.640 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.640 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.897 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:24.897 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:24.897 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:24.897 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:24.897 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:24.897 
14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:24.897 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:24.897 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.897 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.897 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:24.897 [2024-10-01 14:34:16.464346] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:24.897 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.897 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:24.897 "name": "Existed_Raid", 00:09:24.897 "aliases": [ 00:09:24.897 "3fa72bc6-3c0b-4fc2-8e8e-9ca9a1a96746" 00:09:24.897 ], 00:09:24.897 "product_name": "Raid Volume", 00:09:24.897 "block_size": 512, 00:09:24.897 "num_blocks": 63488, 00:09:24.897 "uuid": "3fa72bc6-3c0b-4fc2-8e8e-9ca9a1a96746", 00:09:24.897 "assigned_rate_limits": { 00:09:24.897 "rw_ios_per_sec": 0, 00:09:24.897 "rw_mbytes_per_sec": 0, 00:09:24.897 "r_mbytes_per_sec": 0, 00:09:24.898 "w_mbytes_per_sec": 0 00:09:24.898 }, 00:09:24.898 "claimed": false, 00:09:24.898 "zoned": false, 00:09:24.898 "supported_io_types": { 00:09:24.898 "read": true, 00:09:24.898 "write": true, 00:09:24.898 "unmap": false, 00:09:24.898 "flush": false, 00:09:24.898 "reset": true, 00:09:24.898 "nvme_admin": false, 00:09:24.898 "nvme_io": false, 00:09:24.898 "nvme_io_md": false, 00:09:24.898 "write_zeroes": true, 00:09:24.898 "zcopy": false, 00:09:24.898 "get_zone_info": false, 00:09:24.898 "zone_management": false, 00:09:24.898 "zone_append": false, 00:09:24.898 "compare": false, 00:09:24.898 "compare_and_write": false, 00:09:24.898 
"abort": false, 00:09:24.898 "seek_hole": false, 00:09:24.898 "seek_data": false, 00:09:24.898 "copy": false, 00:09:24.898 "nvme_iov_md": false 00:09:24.898 }, 00:09:24.898 "memory_domains": [ 00:09:24.898 { 00:09:24.898 "dma_device_id": "system", 00:09:24.898 "dma_device_type": 1 00:09:24.898 }, 00:09:24.898 { 00:09:24.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.898 "dma_device_type": 2 00:09:24.898 }, 00:09:24.898 { 00:09:24.898 "dma_device_id": "system", 00:09:24.898 "dma_device_type": 1 00:09:24.898 }, 00:09:24.898 { 00:09:24.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.898 "dma_device_type": 2 00:09:24.898 }, 00:09:24.898 { 00:09:24.898 "dma_device_id": "system", 00:09:24.898 "dma_device_type": 1 00:09:24.898 }, 00:09:24.898 { 00:09:24.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.898 "dma_device_type": 2 00:09:24.898 }, 00:09:24.898 { 00:09:24.898 "dma_device_id": "system", 00:09:24.898 "dma_device_type": 1 00:09:24.898 }, 00:09:24.898 { 00:09:24.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.898 "dma_device_type": 2 00:09:24.898 } 00:09:24.898 ], 00:09:24.898 "driver_specific": { 00:09:24.898 "raid": { 00:09:24.898 "uuid": "3fa72bc6-3c0b-4fc2-8e8e-9ca9a1a96746", 00:09:24.898 "strip_size_kb": 0, 00:09:24.898 "state": "online", 00:09:24.898 "raid_level": "raid1", 00:09:24.898 "superblock": true, 00:09:24.898 "num_base_bdevs": 4, 00:09:24.898 "num_base_bdevs_discovered": 4, 00:09:24.898 "num_base_bdevs_operational": 4, 00:09:24.898 "base_bdevs_list": [ 00:09:24.898 { 00:09:24.898 "name": "NewBaseBdev", 00:09:24.898 "uuid": "a80294d8-2550-49f8-9d42-ccca8a8ee93e", 00:09:24.898 "is_configured": true, 00:09:24.898 "data_offset": 2048, 00:09:24.898 "data_size": 63488 00:09:24.898 }, 00:09:24.898 { 00:09:24.898 "name": "BaseBdev2", 00:09:24.898 "uuid": "f1cb3af6-070b-4e9f-979b-304e21894f1b", 00:09:24.898 "is_configured": true, 00:09:24.898 "data_offset": 2048, 00:09:24.898 "data_size": 63488 00:09:24.898 }, 00:09:24.898 { 
00:09:24.898 "name": "BaseBdev3", 00:09:24.898 "uuid": "8fd76b32-e160-46f1-9c53-7a264a45f991", 00:09:24.898 "is_configured": true, 00:09:24.898 "data_offset": 2048, 00:09:24.898 "data_size": 63488 00:09:24.898 }, 00:09:24.898 { 00:09:24.898 "name": "BaseBdev4", 00:09:24.898 "uuid": "b323f651-abd3-45df-8844-92307ecc8fb4", 00:09:24.898 "is_configured": true, 00:09:24.898 "data_offset": 2048, 00:09:24.898 "data_size": 63488 00:09:24.898 } 00:09:24.898 ] 00:09:24.898 } 00:09:24.898 } 00:09:24.898 }' 00:09:24.898 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:24.898 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:24.898 BaseBdev2 00:09:24.898 BaseBdev3 00:09:24.898 BaseBdev4' 00:09:24.898 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.898 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:24.898 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.898 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:24.898 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.898 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.898 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.898 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.898 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:09:24.898 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:24.898 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.898 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.898 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:24.898 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.898 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.156 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.156 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.156 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.156 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.156 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.156 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:25.156 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.156 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.156 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.156 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.156 14:34:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.156 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.156 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:25.156 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.156 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.156 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.156 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.156 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.156 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.156 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:25.156 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.156 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.157 [2024-10-01 14:34:16.704075] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:25.157 [2024-10-01 14:34:16.704115] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:25.157 [2024-10-01 14:34:16.704226] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:25.157 [2024-10-01 14:34:16.704504] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:25.157 [2024-10-01 14:34:16.704523] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:09:25.157 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.157 14:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72118 00:09:25.157 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 72118 ']' 00:09:25.157 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 72118 00:09:25.157 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:25.157 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:25.157 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72118 00:09:25.157 killing process with pid 72118 00:09:25.157 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:25.157 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:25.157 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72118' 00:09:25.157 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 72118 00:09:25.157 [2024-10-01 14:34:16.732721] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:25.157 14:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 72118 00:09:25.415 [2024-10-01 14:34:16.945474] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:26.347 ************************************ 00:09:26.347 END TEST raid_state_function_test_sb 00:09:26.347 14:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:26.347 00:09:26.347 real 0m8.513s 00:09:26.347 user 0m13.483s 00:09:26.347 sys 0m1.499s 
00:09:26.347 14:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:26.347 14:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.347 ************************************ 00:09:26.347 14:34:17 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:09:26.347 14:34:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:26.347 14:34:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:26.347 14:34:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:26.347 ************************************ 00:09:26.347 START TEST raid_superblock_test 00:09:26.347 ************************************ 00:09:26.347 14:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:09:26.347 14:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:26.347 14:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:09:26.347 14:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:26.347 14:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:26.347 14:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:26.347 14:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:26.347 14:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:26.347 14:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:26.347 14:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:26.347 14:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:26.347 14:34:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:26.347 14:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:26.347 14:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:26.347 14:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:09:26.347 14:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:26.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.347 14:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72761 00:09:26.347 14:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72761 00:09:26.347 14:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 72761 ']' 00:09:26.347 14:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.347 14:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:26.347 14:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:26.347 14:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.347 14:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:26.347 14:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.347 [2024-10-01 14:34:17.786967] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:09:26.347 [2024-10-01 14:34:17.787317] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72761 ] 00:09:26.347 [2024-10-01 14:34:17.934193] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.604 [2024-10-01 14:34:18.127812] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.604 [2024-10-01 14:34:18.249925] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:26.604 [2024-10-01 14:34:18.250131] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:27.168 14:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:27.168 14:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:27.168 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:27.168 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:27.168 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:27.168 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:27.168 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:27.168 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:27.168 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:27.168 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:27.168 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:27.168 
14:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.168 14:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.168 malloc1 00:09:27.168 14:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.168 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:27.168 14:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.168 14:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.168 [2024-10-01 14:34:18.621410] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:27.168 [2024-10-01 14:34:18.621484] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:27.168 [2024-10-01 14:34:18.621501] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:27.168 [2024-10-01 14:34:18.621513] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:27.168 [2024-10-01 14:34:18.623489] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:27.168 [2024-10-01 14:34:18.623530] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:27.168 pt1 00:09:27.168 14:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.168 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:27.168 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:27.168 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:27.168 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:27.168 14:34:18 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:27.168 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:27.168 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:27.168 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:27.168 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:27.168 14:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.168 14:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.168 malloc2 00:09:27.168 14:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.168 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:27.168 14:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.168 14:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.168 [2024-10-01 14:34:18.667406] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:27.168 [2024-10-01 14:34:18.667477] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:27.168 [2024-10-01 14:34:18.667496] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:27.169 [2024-10-01 14:34:18.667504] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:27.169 [2024-10-01 14:34:18.669443] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:27.169 [2024-10-01 14:34:18.669477] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:27.169 
pt2 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.169 malloc3 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.169 [2024-10-01 14:34:18.701478] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:27.169 [2024-10-01 14:34:18.701535] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:27.169 [2024-10-01 14:34:18.701552] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:27.169 [2024-10-01 14:34:18.701560] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:27.169 [2024-10-01 14:34:18.703425] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:27.169 [2024-10-01 14:34:18.703636] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:27.169 pt3 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.169 malloc4 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.169 [2024-10-01 14:34:18.739328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:27.169 [2024-10-01 14:34:18.739390] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:27.169 [2024-10-01 14:34:18.739409] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:27.169 [2024-10-01 14:34:18.739417] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:27.169 [2024-10-01 14:34:18.741328] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:27.169 [2024-10-01 14:34:18.741359] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:27.169 pt4 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.169 [2024-10-01 14:34:18.747378] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:27.169 [2024-10-01 14:34:18.749003] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:27.169 [2024-10-01 14:34:18.749057] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:27.169 [2024-10-01 14:34:18.749094] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:27.169 [2024-10-01 14:34:18.749254] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:27.169 [2024-10-01 14:34:18.749262] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:27.169 [2024-10-01 14:34:18.749512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:27.169 [2024-10-01 14:34:18.749639] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:27.169 [2024-10-01 14:34:18.749648] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:27.169 [2024-10-01 14:34:18.749779] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.169 
14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.169 "name": "raid_bdev1", 00:09:27.169 "uuid": "24f93cac-553b-4600-86e0-c64eac1c3a9e", 00:09:27.169 "strip_size_kb": 0, 00:09:27.169 "state": "online", 00:09:27.169 "raid_level": "raid1", 00:09:27.169 "superblock": true, 00:09:27.169 "num_base_bdevs": 4, 00:09:27.169 "num_base_bdevs_discovered": 4, 00:09:27.169 "num_base_bdevs_operational": 4, 00:09:27.169 "base_bdevs_list": [ 00:09:27.169 { 00:09:27.169 "name": "pt1", 00:09:27.169 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:27.169 "is_configured": true, 00:09:27.169 "data_offset": 2048, 00:09:27.169 "data_size": 63488 00:09:27.169 }, 00:09:27.169 { 00:09:27.169 "name": "pt2", 00:09:27.169 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:27.169 "is_configured": true, 00:09:27.169 "data_offset": 2048, 00:09:27.169 "data_size": 63488 00:09:27.169 }, 00:09:27.169 { 00:09:27.169 "name": "pt3", 00:09:27.169 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:27.169 "is_configured": true, 00:09:27.169 "data_offset": 2048, 00:09:27.169 "data_size": 63488 
00:09:27.169 },
00:09:27.169 {
00:09:27.169 "name": "pt4",
00:09:27.169 "uuid": "00000000-0000-0000-0000-000000000004",
00:09:27.169 "is_configured": true,
00:09:27.169 "data_offset": 2048,
00:09:27.169 "data_size": 63488
00:09:27.169 }
00:09:27.169 ]
00:09:27.169 }'
00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:27.169 14:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.425 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:09:27.425 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:09:27.425 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:27.425 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:27.425 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:09:27.425 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:27.425 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:27.425 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:27.425 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:27.425 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.425 [2024-10-01 14:34:19.043743] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:27.425 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:27.425 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:27.425 "name": "raid_bdev1",
00:09:27.425 "aliases": [
00:09:27.425 "24f93cac-553b-4600-86e0-c64eac1c3a9e"
00:09:27.425 ],
00:09:27.425 "product_name": "Raid Volume",
00:09:27.425 "block_size": 512,
00:09:27.425 "num_blocks": 63488,
00:09:27.425 "uuid": "24f93cac-553b-4600-86e0-c64eac1c3a9e",
00:09:27.425 "assigned_rate_limits": {
00:09:27.425 "rw_ios_per_sec": 0,
00:09:27.425 "rw_mbytes_per_sec": 0,
00:09:27.425 "r_mbytes_per_sec": 0,
00:09:27.425 "w_mbytes_per_sec": 0
00:09:27.425 },
00:09:27.425 "claimed": false,
00:09:27.425 "zoned": false,
00:09:27.425 "supported_io_types": {
00:09:27.425 "read": true,
00:09:27.425 "write": true,
00:09:27.425 "unmap": false,
00:09:27.425 "flush": false,
00:09:27.425 "reset": true,
00:09:27.425 "nvme_admin": false,
00:09:27.425 "nvme_io": false,
00:09:27.425 "nvme_io_md": false,
00:09:27.425 "write_zeroes": true,
00:09:27.425 "zcopy": false,
00:09:27.425 "get_zone_info": false,
00:09:27.425 "zone_management": false,
00:09:27.425 "zone_append": false,
00:09:27.425 "compare": false,
00:09:27.425 "compare_and_write": false,
00:09:27.425 "abort": false,
00:09:27.425 "seek_hole": false,
00:09:27.425 "seek_data": false,
00:09:27.425 "copy": false,
00:09:27.425 "nvme_iov_md": false
00:09:27.425 },
00:09:27.425 "memory_domains": [
00:09:27.425 {
00:09:27.425 "dma_device_id": "system",
00:09:27.425 "dma_device_type": 1
00:09:27.425 },
00:09:27.425 {
00:09:27.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:27.425 "dma_device_type": 2
00:09:27.425 },
00:09:27.425 {
00:09:27.425 "dma_device_id": "system",
00:09:27.425 "dma_device_type": 1
00:09:27.425 },
00:09:27.425 {
00:09:27.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:27.425 "dma_device_type": 2
00:09:27.425 },
00:09:27.425 {
00:09:27.425 "dma_device_id": "system",
00:09:27.425 "dma_device_type": 1
00:09:27.425 },
00:09:27.425 {
00:09:27.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:27.425 "dma_device_type": 2
00:09:27.425 },
00:09:27.425 {
00:09:27.425 "dma_device_id": "system",
00:09:27.425 "dma_device_type": 1
00:09:27.425 },
00:09:27.425 {
00:09:27.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:27.425 "dma_device_type": 2
00:09:27.425 }
00:09:27.425 ],
00:09:27.425 "driver_specific": {
00:09:27.425 "raid": {
00:09:27.425 "uuid": "24f93cac-553b-4600-86e0-c64eac1c3a9e",
00:09:27.425 "strip_size_kb": 0,
00:09:27.425 "state": "online",
00:09:27.425 "raid_level": "raid1",
00:09:27.425 "superblock": true,
00:09:27.425 "num_base_bdevs": 4,
00:09:27.425 "num_base_bdevs_discovered": 4,
00:09:27.425 "num_base_bdevs_operational": 4,
00:09:27.425 "base_bdevs_list": [
00:09:27.425 {
00:09:27.425 "name": "pt1",
00:09:27.425 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:27.425 "is_configured": true,
00:09:27.425 "data_offset": 2048,
00:09:27.425 "data_size": 63488
00:09:27.425 },
00:09:27.425 {
00:09:27.425 "name": "pt2",
00:09:27.426 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:27.426 "is_configured": true,
00:09:27.426 "data_offset": 2048,
00:09:27.426 "data_size": 63488
00:09:27.426 },
00:09:27.426 {
00:09:27.426 "name": "pt3",
00:09:27.426 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:27.426 "is_configured": true,
00:09:27.426 "data_offset": 2048,
00:09:27.426 "data_size": 63488
00:09:27.426 },
00:09:27.426 {
00:09:27.426 "name": "pt4",
00:09:27.426 "uuid": "00000000-0000-0000-0000-000000000004",
00:09:27.426 "is_configured": true,
00:09:27.426 "data_offset": 2048,
00:09:27.426 "data_size": 63488
00:09:27.426 }
00:09:27.426 ]
00:09:27.426 }
00:09:27.426 }
00:09:27.426 }'
00:09:27.426 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:27.426 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:09:27.426 pt2
00:09:27.426 pt3
00:09:27.426 pt4'
00:09:27.426 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:09:27.683 [2024-10-01 14:34:19.251751] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=24f93cac-553b-4600-86e0-c64eac1c3a9e
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 24f93cac-553b-4600-86e0-c64eac1c3a9e ']'
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.683 [2024-10-01 14:34:19.283464] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:27.683 [2024-10-01 14:34:19.283500] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:27.683 [2024-10-01 14:34:19.283580] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:27.683 [2024-10-01 14:34:19.283667] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:27.683 [2024-10-01 14:34:19.283680] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.683 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:09:27.942 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:27.942 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:09:27.942 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:09:27.942 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:09:27.942 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:09:27.942 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:09:27.942 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:09:27.942 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:09:27.942 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:09:27.942 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:09:27.942 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:27.942 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.942 [2024-10-01 14:34:19.395491] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:09:27.942 [2024-10-01 14:34:19.397377] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:09:27.942 [2024-10-01 14:34:19.397433] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:09:27.942 [2024-10-01 14:34:19.397462] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:09:27.942 [2024-10-01 14:34:19.397510] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:09:27.942 [2024-10-01 14:34:19.397557] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:09:27.942 [2024-10-01 14:34:19.397574] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:09:27.942 [2024-10-01 14:34:19.397589] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:09:27.942 [2024-10-01 14:34:19.397600] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:27.942 [2024-10-01 14:34:19.397610] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:09:27.942 request:
00:09:27.942 {
00:09:27.942 "name": "raid_bdev1",
00:09:27.942 "raid_level": "raid1",
00:09:27.942 "base_bdevs": [
00:09:27.942 "malloc1",
00:09:27.942 "malloc2",
00:09:27.942 "malloc3",
00:09:27.942 "malloc4"
00:09:27.942 ],
00:09:27.942 "superblock": false,
00:09:27.942 "method": "bdev_raid_create",
00:09:27.942 "req_id": 1
00:09:27.942 }
00:09:27.942 Got JSON-RPC error response
00:09:27.942 response:
00:09:27.942 {
00:09:27.942 "code": -17,
00:09:27.942 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:09:27.942 }
00:09:27.942 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:09:27.942 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:09:27.942 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:09:27.942 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:09:27.942 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:09:27.942 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:27.942 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:09:27.942 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:27.942 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.942 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:27.942 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:09:27.942 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:09:27.942 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:09:27.942 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:27.942 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.942 [2024-10-01 14:34:19.435485] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:09:27.942 [2024-10-01 14:34:19.435558] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:27.942 [2024-10-01 14:34:19.435575] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:09:27.942 [2024-10-01 14:34:19.435585] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:27.942 [2024-10-01 14:34:19.437586] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:27.942 [2024-10-01 14:34:19.437626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:09:27.942 [2024-10-01 14:34:19.437702] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:09:27.942 [2024-10-01 14:34:19.437763] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:09:27.942 pt1
00:09:27.942 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:27.942 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4
00:09:27.942 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:27.942 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:27.942 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:27.942 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:27.942 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:09:27.942 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:27.942 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:27.943 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:27.943 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:27.943 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:27.943 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:27.943 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.943 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:27.943 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:27.943 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:27.943 "name": "raid_bdev1",
00:09:27.943 "uuid": "24f93cac-553b-4600-86e0-c64eac1c3a9e",
00:09:27.943 "strip_size_kb": 0,
00:09:27.943 "state": "configuring",
00:09:27.943 "raid_level": "raid1",
00:09:27.943 "superblock": true,
00:09:27.943 "num_base_bdevs": 4,
00:09:27.943 "num_base_bdevs_discovered": 1,
00:09:27.943 "num_base_bdevs_operational": 4,
00:09:27.943 "base_bdevs_list": [
00:09:27.943 {
00:09:27.943 "name": "pt1",
00:09:27.943 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:27.943 "is_configured": true,
00:09:27.943 "data_offset": 2048,
00:09:27.943 "data_size": 63488
00:09:27.943 },
00:09:27.943 {
00:09:27.943 "name": null,
00:09:27.943 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:27.943 "is_configured": false,
00:09:27.943 "data_offset": 2048,
00:09:27.943 "data_size": 63488
00:09:27.943 },
00:09:27.943 {
00:09:27.943 "name": null,
00:09:27.943 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:27.943 "is_configured": false,
00:09:27.943 "data_offset": 2048,
00:09:27.943 "data_size": 63488
00:09:27.943 },
00:09:27.943 {
00:09:27.943 "name": null,
00:09:27.943 "uuid": "00000000-0000-0000-0000-000000000004",
00:09:27.943 "is_configured": false,
00:09:27.943 "data_offset": 2048,
00:09:27.943 "data_size": 63488
00:09:27.943 }
00:09:27.943 ]
00:09:27.943 }'
00:09:27.943 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:27.943 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:28.200 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:09:28.200 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:09:28.200 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:28.200 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:28.200 [2024-10-01 14:34:19.731535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:09:28.200 [2024-10-01 14:34:19.731613] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:28.200 [2024-10-01 14:34:19.731630] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:09:28.200 [2024-10-01 14:34:19.731640] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:28.200 [2024-10-01 14:34:19.732078] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:28.200 [2024-10-01 14:34:19.732105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:09:28.200 [2024-10-01 14:34:19.732176] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:09:28.200 [2024-10-01 14:34:19.732201] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:09:28.200 pt2
00:09:28.200 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:28.200 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:09:28.200 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:28.200 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:28.200 [2024-10-01 14:34:19.739571] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:09:28.200 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:28.200 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4
00:09:28.200 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:28.200 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:28.200 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:28.200 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:28.200 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:09:28.200 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:28.200 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:28.200 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:28.200 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:28.200 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:28.200 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:28.200 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:28.200 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:28.200 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:28.200 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:28.200 "name": "raid_bdev1",
00:09:28.200 "uuid": "24f93cac-553b-4600-86e0-c64eac1c3a9e",
00:09:28.200 "strip_size_kb": 0,
00:09:28.200 "state": "configuring",
00:09:28.200 "raid_level": "raid1",
00:09:28.200 "superblock": true,
00:09:28.200 "num_base_bdevs": 4,
00:09:28.200 "num_base_bdevs_discovered": 1,
00:09:28.200 "num_base_bdevs_operational": 4,
00:09:28.200 "base_bdevs_list": [
00:09:28.200 {
00:09:28.200 "name": "pt1",
00:09:28.200 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:28.200 "is_configured": true,
00:09:28.200 "data_offset": 2048,
00:09:28.200 "data_size": 63488
00:09:28.200 },
00:09:28.200 {
00:09:28.200 "name": null,
00:09:28.200 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:28.200 "is_configured": false,
00:09:28.200 "data_offset": 0,
00:09:28.200 "data_size": 63488
00:09:28.200 },
00:09:28.200 {
00:09:28.200 "name": null,
00:09:28.200 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:28.200 "is_configured": false,
00:09:28.200 "data_offset": 2048,
00:09:28.200 "data_size": 63488
00:09:28.200 },
00:09:28.200 {
00:09:28.200 "name": null,
00:09:28.200 "uuid": "00000000-0000-0000-0000-000000000004",
00:09:28.200 "is_configured": false,
00:09:28.200 "data_offset": 2048,
00:09:28.200 "data_size": 63488
00:09:28.200 }
00:09:28.200 ]
00:09:28.200 }'
00:09:28.200 14:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:28.200 14:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:28.458 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:09:28.458 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:09:28.458 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:09:28.458 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:28.458 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:28.458 [2024-10-01 14:34:20.035617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:09:28.458 [2024-10-01 14:34:20.035692] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:28.458 [2024-10-01 14:34:20.035727] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:09:28.458 [2024-10-01 14:34:20.035736] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:28.458 [2024-10-01 14:34:20.036144] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:28.458 [2024-10-01 14:34:20.036328] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:09:28.458 [2024-10-01 14:34:20.036421] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:09:28.458 [2024-10-01 14:34:20.036447] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:09:28.458 pt2
00:09:28.458 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:28.458 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:09:28.458 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:09:28.458 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:09:28.458 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:28.458 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:28.458 [2024-10-01 14:34:20.043610] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:09:28.458 [2024-10-01 14:34:20.043666] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:28.458 [2024-10-01 14:34:20.043685] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:09:28.458 [2024-10-01 14:34:20.043693] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:28.458 [2024-10-01 14:34:20.044074] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:28.458 [2024-10-01 14:34:20.044098] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:09:28.458 [2024-10-01 14:34:20.044168] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:09:28.458 [2024-10-01 14:34:20.044185] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:09:28.458 pt3
00:09:28.458 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:28.458 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:09:28.458 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:09:28.458 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:09:28.458 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:28.458 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:28.458 [2024-10-01 14:34:20.051574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:09:28.458 [2024-10-01 14:34:20.051624] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:28.458 [2024-10-01 14:34:20.051642] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:09:28.458 [2024-10-01 14:34:20.051650] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:28.458 [2024-10-01 14:34:20.052035] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:28.458 [2024-10-01 14:34:20.052057] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:09:28.458 [2024-10-01 14:34:20.052122] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:09:28.458 [2024-10-01 14:34:20.052144] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:09:28.458 [2024-10-01 14:34:20.052270] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:09:28.458 [2024-10-01 14:34:20.052277] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:09:28.458 [2024-10-01 14:34:20.052486] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:09:28.458 [2024-10-01 14:34:20.052608] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:09:28.458 [2024-10-01 14:34:20.052617] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:09:28.458 [2024-10-01 14:34:20.052742] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:28.458 pt4
00:09:28.458 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:28.458 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:09:28.458 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:09:28.458 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:09:28.458 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:28.458 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:28.458 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:28.458 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:28.458 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:09:28.458 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:28.458 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:28.458 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:28.458 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:28.458 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:28.458 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:28.458 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:28.458 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:28.458 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:28.458 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:28.458 "name": "raid_bdev1",
00:09:28.459 "uuid": "24f93cac-553b-4600-86e0-c64eac1c3a9e",
00:09:28.459 "strip_size_kb": 0,
00:09:28.459 "state": "online",
00:09:28.459 "raid_level": "raid1",
00:09:28.459 "superblock": true,
00:09:28.459 "num_base_bdevs": 4,
00:09:28.459 "num_base_bdevs_discovered": 4,
00:09:28.459 "num_base_bdevs_operational": 4,
00:09:28.459 "base_bdevs_list": [
00:09:28.459 {
00:09:28.459 "name": "pt1",
00:09:28.459 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:28.459 "is_configured": true,
00:09:28.459 "data_offset": 2048,
00:09:28.459 "data_size": 63488
00:09:28.459 },
00:09:28.459 {
00:09:28.459 "name": "pt2",
00:09:28.459 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:28.459 "is_configured": true,
00:09:28.459 "data_offset": 2048,
00:09:28.459 "data_size": 63488
00:09:28.459 },
00:09:28.459 {
00:09:28.459 "name": "pt3",
00:09:28.459 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:28.459 "is_configured": true,
00:09:28.459 "data_offset": 2048,
00:09:28.459 "data_size": 63488
00:09:28.459 },
00:09:28.459 {
00:09:28.459 "name": "pt4",
00:09:28.459 "uuid": "00000000-0000-0000-0000-000000000004",
00:09:28.459 "is_configured": true,
00:09:28.459 "data_offset": 2048,
00:09:28.459 "data_size": 63488
00:09:28.459 }
00:09:28.459 ]
00:09:28.459 }'
00:09:28.459 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:28.459 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:28.717 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:09:28.717 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:09:28.717 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:28.717 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:28.717 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:09:28.717 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:28.717 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:28.717 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:28.717 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:28.717 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:28.717 [2024-10-01 14:34:20.355983] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:28.717 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:28.717 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:28.717 "name": "raid_bdev1",
00:09:28.717 "aliases": [
00:09:28.717 "24f93cac-553b-4600-86e0-c64eac1c3a9e"
00:09:28.717 ],
00:09:28.717 "product_name": "Raid Volume",
00:09:28.717 "block_size": 512,
00:09:28.717 "num_blocks": 63488,
00:09:28.717 "uuid": "24f93cac-553b-4600-86e0-c64eac1c3a9e",
00:09:28.717 "assigned_rate_limits": {
00:09:28.717 "rw_ios_per_sec": 0,
00:09:28.717 "rw_mbytes_per_sec": 0,
00:09:28.717 "r_mbytes_per_sec": 0,
00:09:28.717 "w_mbytes_per_sec": 0
00:09:28.717 },
00:09:28.717 "claimed": false,
00:09:28.717 "zoned": false,
00:09:28.717 "supported_io_types": {
00:09:28.717 "read": true,
00:09:28.717 "write": true,
00:09:28.717 "unmap": false,
00:09:28.717 "flush": false,
00:09:28.717 "reset": true,
00:09:28.717 "nvme_admin": false,
00:09:28.717 "nvme_io": false,
00:09:28.717 "nvme_io_md": false,
00:09:28.717 "write_zeroes": true,
00:09:28.717 "zcopy": false,
00:09:28.717 "get_zone_info": false,
00:09:28.717 "zone_management": false,
00:09:28.717 "zone_append": false,
00:09:28.717 "compare": false,
00:09:28.717 "compare_and_write": false,
00:09:28.717 "abort": false,
00:09:28.717 "seek_hole": false,
00:09:28.717 "seek_data": false,
00:09:28.717 "copy": false,
00:09:28.717 "nvme_iov_md": false
00:09:28.717 },
00:09:28.717 "memory_domains": [
00:09:28.717 {
00:09:28.717 "dma_device_id": "system",
"dma_device_type": 1 00:09:28.717 }, 00:09:28.717 { 00:09:28.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.717 "dma_device_type": 2 00:09:28.717 }, 00:09:28.717 { 00:09:28.717 "dma_device_id": "system", 00:09:28.717 "dma_device_type": 1 00:09:28.717 }, 00:09:28.717 { 00:09:28.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.717 "dma_device_type": 2 00:09:28.717 }, 00:09:28.717 { 00:09:28.717 "dma_device_id": "system", 00:09:28.717 "dma_device_type": 1 00:09:28.717 }, 00:09:28.717 { 00:09:28.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.717 "dma_device_type": 2 00:09:28.717 }, 00:09:28.717 { 00:09:28.717 "dma_device_id": "system", 00:09:28.717 "dma_device_type": 1 00:09:28.717 }, 00:09:28.717 { 00:09:28.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.717 "dma_device_type": 2 00:09:28.717 } 00:09:28.717 ], 00:09:28.717 "driver_specific": { 00:09:28.717 "raid": { 00:09:28.717 "uuid": "24f93cac-553b-4600-86e0-c64eac1c3a9e", 00:09:28.717 "strip_size_kb": 0, 00:09:28.717 "state": "online", 00:09:28.717 "raid_level": "raid1", 00:09:28.717 "superblock": true, 00:09:28.717 "num_base_bdevs": 4, 00:09:28.717 "num_base_bdevs_discovered": 4, 00:09:28.717 "num_base_bdevs_operational": 4, 00:09:28.717 "base_bdevs_list": [ 00:09:28.717 { 00:09:28.717 "name": "pt1", 00:09:28.717 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:28.717 "is_configured": true, 00:09:28.717 "data_offset": 2048, 00:09:28.717 "data_size": 63488 00:09:28.717 }, 00:09:28.717 { 00:09:28.717 "name": "pt2", 00:09:28.717 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:28.717 "is_configured": true, 00:09:28.717 "data_offset": 2048, 00:09:28.717 "data_size": 63488 00:09:28.717 }, 00:09:28.717 { 00:09:28.717 "name": "pt3", 00:09:28.717 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:28.717 "is_configured": true, 00:09:28.717 "data_offset": 2048, 00:09:28.717 "data_size": 63488 00:09:28.717 }, 00:09:28.717 { 00:09:28.717 "name": "pt4", 00:09:28.717 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:09:28.717 "is_configured": true, 00:09:28.717 "data_offset": 2048, 00:09:28.717 "data_size": 63488 00:09:28.717 } 00:09:28.717 ] 00:09:28.717 } 00:09:28.717 } 00:09:28.717 }' 00:09:28.717 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:28.975 pt2 00:09:28.975 pt3 00:09:28.975 pt4' 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:28.975 14:34:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.975 [2024-10-01 14:34:20.563995] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 24f93cac-553b-4600-86e0-c64eac1c3a9e '!=' 24f93cac-553b-4600-86e0-c64eac1c3a9e ']' 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.975 [2024-10-01 14:34:20.595765] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:28.975 14:34:20 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.975 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.975 "name": "raid_bdev1", 00:09:28.975 "uuid": "24f93cac-553b-4600-86e0-c64eac1c3a9e", 00:09:28.975 "strip_size_kb": 0, 00:09:28.975 "state": "online", 
00:09:28.975 "raid_level": "raid1", 00:09:28.975 "superblock": true, 00:09:28.975 "num_base_bdevs": 4, 00:09:28.975 "num_base_bdevs_discovered": 3, 00:09:28.975 "num_base_bdevs_operational": 3, 00:09:28.975 "base_bdevs_list": [ 00:09:28.975 { 00:09:28.975 "name": null, 00:09:28.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.975 "is_configured": false, 00:09:28.975 "data_offset": 0, 00:09:28.975 "data_size": 63488 00:09:28.975 }, 00:09:28.975 { 00:09:28.975 "name": "pt2", 00:09:28.975 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:28.975 "is_configured": true, 00:09:28.976 "data_offset": 2048, 00:09:28.976 "data_size": 63488 00:09:28.976 }, 00:09:28.976 { 00:09:28.976 "name": "pt3", 00:09:28.976 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:28.976 "is_configured": true, 00:09:28.976 "data_offset": 2048, 00:09:28.976 "data_size": 63488 00:09:28.976 }, 00:09:28.976 { 00:09:28.976 "name": "pt4", 00:09:28.976 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:28.976 "is_configured": true, 00:09:28.976 "data_offset": 2048, 00:09:28.976 "data_size": 63488 00:09:28.976 } 00:09:28.976 ] 00:09:28.976 }' 00:09:28.976 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.976 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.233 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:29.233 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.233 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.233 [2024-10-01 14:34:20.903771] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:29.233 [2024-10-01 14:34:20.903813] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:29.233 [2024-10-01 14:34:20.903883] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:09:29.233 [2024-10-01 14:34:20.903955] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:29.233 [2024-10-01 14:34:20.903964] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:29.233 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.233 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:29.233 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.233 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.233 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.491 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.491 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:29.491 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:29.491 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:29.491 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:29.491 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:29.491 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.491 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.491 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.491 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:29.491 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:29.491 
14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:09:29.491 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.491 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.491 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.491 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:29.491 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:29.491 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:09:29.491 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.491 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.491 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.491 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:29.491 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:29.491 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:29.491 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:29.491 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:29.491 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.491 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.491 [2024-10-01 14:34:20.971770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:29.491 [2024-10-01 14:34:20.971832] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:29.491 [2024-10-01 14:34:20.971849] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:09:29.491 [2024-10-01 14:34:20.971857] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:29.491 [2024-10-01 14:34:20.973879] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:29.491 [2024-10-01 14:34:20.974070] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:29.491 [2024-10-01 14:34:20.974162] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:29.491 [2024-10-01 14:34:20.974203] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:29.491 pt2 00:09:29.491 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.491 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:29.491 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:29.491 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.491 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:29.491 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:29.491 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.491 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.491 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.491 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.491 14:34:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.491 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.491 14:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:29.491 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.491 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.491 14:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.491 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.491 "name": "raid_bdev1", 00:09:29.491 "uuid": "24f93cac-553b-4600-86e0-c64eac1c3a9e", 00:09:29.491 "strip_size_kb": 0, 00:09:29.491 "state": "configuring", 00:09:29.491 "raid_level": "raid1", 00:09:29.491 "superblock": true, 00:09:29.491 "num_base_bdevs": 4, 00:09:29.491 "num_base_bdevs_discovered": 1, 00:09:29.491 "num_base_bdevs_operational": 3, 00:09:29.491 "base_bdevs_list": [ 00:09:29.491 { 00:09:29.491 "name": null, 00:09:29.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.491 "is_configured": false, 00:09:29.491 "data_offset": 2048, 00:09:29.491 "data_size": 63488 00:09:29.491 }, 00:09:29.491 { 00:09:29.491 "name": "pt2", 00:09:29.491 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:29.491 "is_configured": true, 00:09:29.491 "data_offset": 2048, 00:09:29.491 "data_size": 63488 00:09:29.491 }, 00:09:29.491 { 00:09:29.491 "name": null, 00:09:29.491 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:29.491 "is_configured": false, 00:09:29.491 "data_offset": 2048, 00:09:29.491 "data_size": 63488 00:09:29.491 }, 00:09:29.491 { 00:09:29.491 "name": null, 00:09:29.491 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:29.491 "is_configured": false, 00:09:29.491 "data_offset": 2048, 00:09:29.491 "data_size": 63488 00:09:29.491 } 00:09:29.491 ] 00:09:29.491 }' 
00:09:29.491 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.492 14:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.749 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:09:29.749 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:29.749 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:29.749 14:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.749 14:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.749 [2024-10-01 14:34:21.303863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:29.749 [2024-10-01 14:34:21.303941] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:29.749 [2024-10-01 14:34:21.303960] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:09:29.749 [2024-10-01 14:34:21.303968] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:29.749 [2024-10-01 14:34:21.304380] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:29.749 [2024-10-01 14:34:21.304400] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:29.749 [2024-10-01 14:34:21.304474] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:29.749 [2024-10-01 14:34:21.304497] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:29.749 pt3 00:09:29.749 14:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.749 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:09:29.749 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:29.749 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.749 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:29.749 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:29.749 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.749 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.749 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.749 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.749 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.749 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.749 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:29.750 14:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.750 14:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.750 14:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.750 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.750 "name": "raid_bdev1", 00:09:29.750 "uuid": "24f93cac-553b-4600-86e0-c64eac1c3a9e", 00:09:29.750 "strip_size_kb": 0, 00:09:29.750 "state": "configuring", 00:09:29.750 "raid_level": "raid1", 00:09:29.750 "superblock": true, 00:09:29.750 "num_base_bdevs": 4, 00:09:29.750 "num_base_bdevs_discovered": 2, 00:09:29.750 "num_base_bdevs_operational": 3, 00:09:29.750 
"base_bdevs_list": [ 00:09:29.750 { 00:09:29.750 "name": null, 00:09:29.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.750 "is_configured": false, 00:09:29.750 "data_offset": 2048, 00:09:29.750 "data_size": 63488 00:09:29.750 }, 00:09:29.750 { 00:09:29.750 "name": "pt2", 00:09:29.750 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:29.750 "is_configured": true, 00:09:29.750 "data_offset": 2048, 00:09:29.750 "data_size": 63488 00:09:29.750 }, 00:09:29.750 { 00:09:29.750 "name": "pt3", 00:09:29.750 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:29.750 "is_configured": true, 00:09:29.750 "data_offset": 2048, 00:09:29.750 "data_size": 63488 00:09:29.750 }, 00:09:29.750 { 00:09:29.750 "name": null, 00:09:29.750 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:29.750 "is_configured": false, 00:09:29.750 "data_offset": 2048, 00:09:29.750 "data_size": 63488 00:09:29.750 } 00:09:29.750 ] 00:09:29.750 }' 00:09:29.750 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.750 14:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.007 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:09:30.007 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:30.007 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:09:30.007 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:30.007 14:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.007 14:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.007 [2024-10-01 14:34:21.619916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:30.007 [2024-10-01 14:34:21.619983] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:30.007 [2024-10-01 14:34:21.620003] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:09:30.007 [2024-10-01 14:34:21.620011] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:30.007 [2024-10-01 14:34:21.620434] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:30.007 [2024-10-01 14:34:21.620457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:30.007 [2024-10-01 14:34:21.620531] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:09:30.007 [2024-10-01 14:34:21.620554] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:30.007 [2024-10-01 14:34:21.620670] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:30.007 [2024-10-01 14:34:21.620678] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:30.007 [2024-10-01 14:34:21.620902] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:30.007 [2024-10-01 14:34:21.621172] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:30.007 [2024-10-01 14:34:21.621187] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:30.007 [2024-10-01 14:34:21.621304] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:30.007 pt4 00:09:30.008 14:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.008 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:30.008 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:30.008 14:34:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:30.008 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:30.008 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:30.008 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.008 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.008 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.008 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.008 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.008 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.008 14:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.008 14:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.008 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:30.008 14:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.008 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.008 "name": "raid_bdev1", 00:09:30.008 "uuid": "24f93cac-553b-4600-86e0-c64eac1c3a9e", 00:09:30.008 "strip_size_kb": 0, 00:09:30.008 "state": "online", 00:09:30.008 "raid_level": "raid1", 00:09:30.008 "superblock": true, 00:09:30.008 "num_base_bdevs": 4, 00:09:30.008 "num_base_bdevs_discovered": 3, 00:09:30.008 "num_base_bdevs_operational": 3, 00:09:30.008 "base_bdevs_list": [ 00:09:30.008 { 00:09:30.008 "name": null, 00:09:30.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.008 "is_configured": false, 00:09:30.008 
"data_offset": 2048, 00:09:30.008 "data_size": 63488 00:09:30.008 }, 00:09:30.008 { 00:09:30.008 "name": "pt2", 00:09:30.008 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:30.008 "is_configured": true, 00:09:30.008 "data_offset": 2048, 00:09:30.008 "data_size": 63488 00:09:30.008 }, 00:09:30.008 { 00:09:30.008 "name": "pt3", 00:09:30.008 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:30.008 "is_configured": true, 00:09:30.008 "data_offset": 2048, 00:09:30.008 "data_size": 63488 00:09:30.008 }, 00:09:30.008 { 00:09:30.008 "name": "pt4", 00:09:30.008 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:30.008 "is_configured": true, 00:09:30.008 "data_offset": 2048, 00:09:30.008 "data_size": 63488 00:09:30.008 } 00:09:30.008 ] 00:09:30.008 }' 00:09:30.008 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.008 14:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.265 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:30.265 14:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.265 14:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.265 [2024-10-01 14:34:21.939957] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:30.265 [2024-10-01 14:34:21.940159] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:30.265 [2024-10-01 14:34:21.940281] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:30.265 [2024-10-01 14:34:21.940416] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:30.265 [2024-10-01 14:34:21.940434] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:30.265 14:34:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.265 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.265 14:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.265 14:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.265 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:30.523 14:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.523 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:30.523 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:30.523 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:09:30.523 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:09:30.523 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:09:30.523 14:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.523 14:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.523 14:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.523 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:30.523 14:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.523 14:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.523 [2024-10-01 14:34:21.991985] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:30.523 [2024-10-01 14:34:21.992181] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:09:30.523 [2024-10-01 14:34:21.992240] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:09:30.523 [2024-10-01 14:34:21.992281] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:30.523 [2024-10-01 14:34:21.994349] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:30.523 [2024-10-01 14:34:21.994386] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:30.523 [2024-10-01 14:34:21.994469] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:30.523 [2024-10-01 14:34:21.994510] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:30.523 [2024-10-01 14:34:21.994615] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:30.523 [2024-10-01 14:34:21.994627] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:30.523 [2024-10-01 14:34:21.994643] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:09:30.523 [2024-10-01 14:34:21.994691] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:30.523 [2024-10-01 14:34:21.994794] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:30.523 pt1 00:09:30.523 14:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.523 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:09:30.523 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:30.523 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:30.523 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:09:30.523 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:30.523 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:30.523 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.523 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.523 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.523 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.523 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.523 14:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.523 14:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:30.524 14:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.524 14:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.524 14:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.524 14:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.524 "name": "raid_bdev1", 00:09:30.524 "uuid": "24f93cac-553b-4600-86e0-c64eac1c3a9e", 00:09:30.524 "strip_size_kb": 0, 00:09:30.524 "state": "configuring", 00:09:30.524 "raid_level": "raid1", 00:09:30.524 "superblock": true, 00:09:30.524 "num_base_bdevs": 4, 00:09:30.524 "num_base_bdevs_discovered": 2, 00:09:30.524 "num_base_bdevs_operational": 3, 00:09:30.524 "base_bdevs_list": [ 00:09:30.524 { 00:09:30.524 "name": null, 00:09:30.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.524 "is_configured": false, 00:09:30.524 "data_offset": 2048, 00:09:30.524 
"data_size": 63488 00:09:30.524 }, 00:09:30.524 { 00:09:30.524 "name": "pt2", 00:09:30.524 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:30.524 "is_configured": true, 00:09:30.524 "data_offset": 2048, 00:09:30.524 "data_size": 63488 00:09:30.524 }, 00:09:30.524 { 00:09:30.524 "name": "pt3", 00:09:30.524 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:30.524 "is_configured": true, 00:09:30.524 "data_offset": 2048, 00:09:30.524 "data_size": 63488 00:09:30.524 }, 00:09:30.524 { 00:09:30.524 "name": null, 00:09:30.524 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:30.524 "is_configured": false, 00:09:30.524 "data_offset": 2048, 00:09:30.524 "data_size": 63488 00:09:30.524 } 00:09:30.524 ] 00:09:30.524 }' 00:09:30.524 14:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.524 14:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.781 14:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:30.781 14:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:09:30.781 14:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.781 14:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.781 14:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.781 14:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:09:30.781 14:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:30.781 14:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.781 14:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.781 [2024-10-01 
14:34:22.348085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:30.781 [2024-10-01 14:34:22.348159] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:30.781 [2024-10-01 14:34:22.348179] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:09:30.781 [2024-10-01 14:34:22.348188] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:30.781 [2024-10-01 14:34:22.348588] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:30.781 [2024-10-01 14:34:22.348607] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:30.781 [2024-10-01 14:34:22.348681] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:09:30.781 [2024-10-01 14:34:22.348699] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:30.781 [2024-10-01 14:34:22.348825] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:09:30.781 [2024-10-01 14:34:22.348838] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:30.781 [2024-10-01 14:34:22.349054] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:09:30.781 [2024-10-01 14:34:22.349164] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:09:30.781 [2024-10-01 14:34:22.349179] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:09:30.782 [2024-10-01 14:34:22.349291] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:30.782 pt4 00:09:30.782 14:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.782 14:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:30.782 14:34:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:30.782 14:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:30.782 14:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:30.782 14:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:30.782 14:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.782 14:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.782 14:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.782 14:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.782 14:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.782 14:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.782 14:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:30.782 14:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.782 14:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.782 14:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.782 14:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.782 "name": "raid_bdev1", 00:09:30.782 "uuid": "24f93cac-553b-4600-86e0-c64eac1c3a9e", 00:09:30.782 "strip_size_kb": 0, 00:09:30.782 "state": "online", 00:09:30.782 "raid_level": "raid1", 00:09:30.782 "superblock": true, 00:09:30.782 "num_base_bdevs": 4, 00:09:30.782 "num_base_bdevs_discovered": 3, 00:09:30.782 "num_base_bdevs_operational": 3, 00:09:30.782 "base_bdevs_list": [ 00:09:30.782 { 
00:09:30.782 "name": null, 00:09:30.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.782 "is_configured": false, 00:09:30.782 "data_offset": 2048, 00:09:30.782 "data_size": 63488 00:09:30.782 }, 00:09:30.782 { 00:09:30.782 "name": "pt2", 00:09:30.782 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:30.782 "is_configured": true, 00:09:30.782 "data_offset": 2048, 00:09:30.782 "data_size": 63488 00:09:30.782 }, 00:09:30.782 { 00:09:30.782 "name": "pt3", 00:09:30.782 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:30.782 "is_configured": true, 00:09:30.782 "data_offset": 2048, 00:09:30.782 "data_size": 63488 00:09:30.782 }, 00:09:30.782 { 00:09:30.782 "name": "pt4", 00:09:30.782 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:30.782 "is_configured": true, 00:09:30.782 "data_offset": 2048, 00:09:30.782 "data_size": 63488 00:09:30.782 } 00:09:30.782 ] 00:09:30.782 }' 00:09:30.782 14:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.782 14:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.039 14:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:31.039 14:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.039 14:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.039 14:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:31.039 14:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.039 14:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:31.039 14:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:31.039 14:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:31.039 
14:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.039 14:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.039 [2024-10-01 14:34:22.668392] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:31.039 14:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.039 14:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 24f93cac-553b-4600-86e0-c64eac1c3a9e '!=' 24f93cac-553b-4600-86e0-c64eac1c3a9e ']' 00:09:31.039 14:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72761 00:09:31.039 14:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 72761 ']' 00:09:31.039 14:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 72761 00:09:31.039 14:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:31.039 14:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:31.039 14:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72761 00:09:31.039 killing process with pid 72761 00:09:31.039 14:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:31.039 14:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:31.039 14:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72761' 00:09:31.039 14:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 72761 00:09:31.039 [2024-10-01 14:34:22.708792] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:31.039 14:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 72761 00:09:31.039 [2024-10-01 14:34:22.708887] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:31.039 [2024-10-01 14:34:22.708961] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:31.039 [2024-10-01 14:34:22.708983] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:09:31.296 [2024-10-01 14:34:22.915549] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:32.230 14:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:32.230 00:09:32.230 real 0m6.001s 00:09:32.230 user 0m9.278s 00:09:32.230 sys 0m1.078s 00:09:32.230 ************************************ 00:09:32.230 END TEST raid_superblock_test 00:09:32.230 ************************************ 00:09:32.230 14:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:32.230 14:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.230 14:34:23 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:09:32.230 14:34:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:32.230 14:34:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:32.230 14:34:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:32.230 ************************************ 00:09:32.230 START TEST raid_read_error_test 00:09:32.230 ************************************ 00:09:32.230 14:34:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read 00:09:32.230 14:34:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:32.230 14:34:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:32.230 14:34:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:32.230 14:34:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:32.230 14:34:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:32.230 14:34:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:32.230 14:34:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:32.230 14:34:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:32.230 14:34:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:32.230 14:34:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:32.230 14:34:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:32.230 14:34:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:32.230 14:34:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:32.230 14:34:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:32.230 14:34:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:32.230 14:34:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:32.230 14:34:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:32.230 14:34:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:32.230 14:34:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:32.230 14:34:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:32.230 14:34:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:32.230 14:34:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:32.230 14:34:23 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:32.230 14:34:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:32.230 14:34:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:32.230 14:34:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:32.230 14:34:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:32.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.230 14:34:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Hl0XQavxv9 00:09:32.230 14:34:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73226 00:09:32.230 14:34:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73226 00:09:32.230 14:34:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 73226 ']' 00:09:32.230 14:34:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.230 14:34:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:32.230 14:34:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.230 14:34:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:32.230 14:34:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.230 14:34:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:32.230 [2024-10-01 14:34:23.837215] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:09:32.230 [2024-10-01 14:34:23.837348] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73226 ] 00:09:32.487 [2024-10-01 14:34:23.985407] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.745 [2024-10-01 14:34:24.179305] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.745 [2024-10-01 14:34:24.316495] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:32.745 [2024-10-01 14:34:24.316550] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:33.002 14:34:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:33.002 14:34:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:33.002 14:34:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:33.002 14:34:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:33.002 14:34:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.002 14:34:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.002 BaseBdev1_malloc 00:09:33.002 14:34:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.002 14:34:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:33.002 14:34:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.002 14:34:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.002 true 00:09:33.002 14:34:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:33.002 14:34:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:33.002 14:34:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.002 14:34:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.002 [2024-10-01 14:34:24.684588] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:33.002 [2024-10-01 14:34:24.684651] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.002 [2024-10-01 14:34:24.684677] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:33.002 [2024-10-01 14:34:24.684693] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:33.260 [2024-10-01 14:34:24.687066] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.260 [2024-10-01 14:34:24.687112] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:33.260 BaseBdev1 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.260 BaseBdev2_malloc 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.260 true 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.260 [2024-10-01 14:34:24.747080] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:33.260 [2024-10-01 14:34:24.747389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.260 [2024-10-01 14:34:24.747416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:33.260 [2024-10-01 14:34:24.747428] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:33.260 [2024-10-01 14:34:24.749727] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.260 [2024-10-01 14:34:24.749760] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:33.260 BaseBdev2 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.260 BaseBdev3_malloc 00:09:33.260 14:34:24 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.260 true 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.260 [2024-10-01 14:34:24.792088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:33.260 [2024-10-01 14:34:24.792144] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.260 [2024-10-01 14:34:24.792163] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:33.260 [2024-10-01 14:34:24.792173] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:33.260 [2024-10-01 14:34:24.794381] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.260 [2024-10-01 14:34:24.794418] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:33.260 BaseBdev3 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.260 BaseBdev4_malloc 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.260 true 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.260 [2024-10-01 14:34:24.837301] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:09:33.260 [2024-10-01 14:34:24.837359] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.260 [2024-10-01 14:34:24.837380] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:33.260 [2024-10-01 14:34:24.837394] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:33.260 [2024-10-01 14:34:24.840142] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.260 [2024-10-01 14:34:24.840196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:09:33.260 BaseBdev4 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.260 [2024-10-01 14:34:24.845407] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:33.260 [2024-10-01 14:34:24.847385] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:33.260 [2024-10-01 14:34:24.847470] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:33.260 [2024-10-01 14:34:24.847541] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:33.260 [2024-10-01 14:34:24.847800] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:09:33.260 [2024-10-01 14:34:24.847821] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:33.260 [2024-10-01 14:34:24.848121] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:33.260 [2024-10-01 14:34:24.848288] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:09:33.260 [2024-10-01 14:34:24.848303] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:09:33.260 [2024-10-01 14:34:24.848471] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:09:33.260 14:34:24 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.260 14:34:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.260 "name": "raid_bdev1", 00:09:33.260 "uuid": "60b2fe30-6db9-4152-a67a-0e64b48973da", 00:09:33.260 "strip_size_kb": 0, 00:09:33.260 "state": "online", 00:09:33.260 "raid_level": "raid1", 00:09:33.260 "superblock": true, 00:09:33.260 "num_base_bdevs": 4, 00:09:33.260 "num_base_bdevs_discovered": 4, 00:09:33.260 "num_base_bdevs_operational": 4, 00:09:33.260 "base_bdevs_list": [ 00:09:33.260 { 
00:09:33.260 "name": "BaseBdev1", 00:09:33.260 "uuid": "7e78f79e-1581-5453-9bb1-639b4d2859fa", 00:09:33.260 "is_configured": true, 00:09:33.260 "data_offset": 2048, 00:09:33.260 "data_size": 63488 00:09:33.260 }, 00:09:33.260 { 00:09:33.260 "name": "BaseBdev2", 00:09:33.260 "uuid": "c0575d46-6e86-5fda-92c9-27c744b5a440", 00:09:33.260 "is_configured": true, 00:09:33.260 "data_offset": 2048, 00:09:33.260 "data_size": 63488 00:09:33.260 }, 00:09:33.261 { 00:09:33.261 "name": "BaseBdev3", 00:09:33.261 "uuid": "2f669d21-f9c9-58b0-8aee-90e39b5fc54b", 00:09:33.261 "is_configured": true, 00:09:33.261 "data_offset": 2048, 00:09:33.261 "data_size": 63488 00:09:33.261 }, 00:09:33.261 { 00:09:33.261 "name": "BaseBdev4", 00:09:33.261 "uuid": "0788a51e-f9e2-52a3-99c3-90b86e1c58e3", 00:09:33.261 "is_configured": true, 00:09:33.261 "data_offset": 2048, 00:09:33.261 "data_size": 63488 00:09:33.261 } 00:09:33.261 ] 00:09:33.261 }' 00:09:33.261 14:34:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.261 14:34:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.518 14:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:33.518 14:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:33.776 [2024-10-01 14:34:25.294447] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:34.710 14:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:34.710 14:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.710 14:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.710 14:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.710 14:34:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:34.710 14:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:34.710 14:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:34.710 14:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:09:34.710 14:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:09:34.710 14:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:34.710 14:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:34.710 14:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.710 14:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.710 14:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:34.710 14:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.710 14:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.710 14:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.710 14:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.710 14:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.710 14:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:34.710 14:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.710 14:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.710 14:34:26 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.710 14:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.710 "name": "raid_bdev1", 00:09:34.710 "uuid": "60b2fe30-6db9-4152-a67a-0e64b48973da", 00:09:34.710 "strip_size_kb": 0, 00:09:34.710 "state": "online", 00:09:34.710 "raid_level": "raid1", 00:09:34.710 "superblock": true, 00:09:34.710 "num_base_bdevs": 4, 00:09:34.710 "num_base_bdevs_discovered": 4, 00:09:34.710 "num_base_bdevs_operational": 4, 00:09:34.710 "base_bdevs_list": [ 00:09:34.710 { 00:09:34.710 "name": "BaseBdev1", 00:09:34.710 "uuid": "7e78f79e-1581-5453-9bb1-639b4d2859fa", 00:09:34.710 "is_configured": true, 00:09:34.710 "data_offset": 2048, 00:09:34.710 "data_size": 63488 00:09:34.710 }, 00:09:34.710 { 00:09:34.710 "name": "BaseBdev2", 00:09:34.710 "uuid": "c0575d46-6e86-5fda-92c9-27c744b5a440", 00:09:34.710 "is_configured": true, 00:09:34.710 "data_offset": 2048, 00:09:34.710 "data_size": 63488 00:09:34.710 }, 00:09:34.710 { 00:09:34.710 "name": "BaseBdev3", 00:09:34.710 "uuid": "2f669d21-f9c9-58b0-8aee-90e39b5fc54b", 00:09:34.710 "is_configured": true, 00:09:34.710 "data_offset": 2048, 00:09:34.710 "data_size": 63488 00:09:34.710 }, 00:09:34.710 { 00:09:34.710 "name": "BaseBdev4", 00:09:34.710 "uuid": "0788a51e-f9e2-52a3-99c3-90b86e1c58e3", 00:09:34.710 "is_configured": true, 00:09:34.710 "data_offset": 2048, 00:09:34.710 "data_size": 63488 00:09:34.710 } 00:09:34.710 ] 00:09:34.710 }' 00:09:34.710 14:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.710 14:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.968 14:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:34.968 14:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.968 14:34:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:34.968 [2024-10-01 14:34:26.522436] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:34.968 [2024-10-01 14:34:26.522473] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:34.968 [2024-10-01 14:34:26.525619] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:34.968 [2024-10-01 14:34:26.525682] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:34.968 [2024-10-01 14:34:26.525834] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:34.968 [2024-10-01 14:34:26.525848] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:09:34.968 { 00:09:34.968 "results": [ 00:09:34.968 { 00:09:34.968 "job": "raid_bdev1", 00:09:34.968 "core_mask": "0x1", 00:09:34.968 "workload": "randrw", 00:09:34.968 "percentage": 50, 00:09:34.968 "status": "finished", 00:09:34.968 "queue_depth": 1, 00:09:34.968 "io_size": 131072, 00:09:34.968 "runtime": 1.226152, 00:09:34.968 "iops": 10935.838297372593, 00:09:34.968 "mibps": 1366.9797871715741, 00:09:34.968 "io_failed": 0, 00:09:34.968 "io_timeout": 0, 00:09:34.968 "avg_latency_us": 88.24900841570243, 00:09:34.968 "min_latency_us": 29.53846153846154, 00:09:34.968 "max_latency_us": 1777.033846153846 00:09:34.968 } 00:09:34.968 ], 00:09:34.968 "core_count": 1 00:09:34.968 } 00:09:34.968 14:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.968 14:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73226 00:09:34.968 14:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 73226 ']' 00:09:34.968 14:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 73226 00:09:34.968 14:34:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # uname 00:09:34.968 14:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:34.968 14:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73226 00:09:34.968 14:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:34.968 14:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:34.968 14:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73226' 00:09:34.968 killing process with pid 73226 00:09:34.968 14:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 73226 00:09:34.968 [2024-10-01 14:34:26.553914] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:34.968 14:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 73226 00:09:35.226 [2024-10-01 14:34:26.761980] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:36.159 14:34:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Hl0XQavxv9 00:09:36.159 14:34:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:36.159 14:34:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:36.159 14:34:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:36.159 14:34:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:36.159 14:34:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:36.159 14:34:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:36.159 14:34:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:36.159 00:09:36.159 real 0m3.882s 00:09:36.159 user 0m4.580s 00:09:36.159 sys 0m0.410s 
00:09:36.159 14:34:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:36.159 14:34:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.159 ************************************ 00:09:36.159 END TEST raid_read_error_test 00:09:36.159 ************************************ 00:09:36.159 14:34:27 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:09:36.159 14:34:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:36.159 14:34:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:36.159 14:34:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:36.159 ************************************ 00:09:36.159 START TEST raid_write_error_test 00:09:36.159 ************************************ 00:09:36.159 14:34:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 write 00:09:36.159 14:34:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:36.159 14:34:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:36.159 14:34:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:36.159 14:34:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:36.159 14:34:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:36.159 14:34:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:36.159 14:34:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:36.159 14:34:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:36.159 14:34:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:36.159 14:34:27 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:36.159 14:34:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:36.159 14:34:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:36.159 14:34:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:36.159 14:34:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:36.159 14:34:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:36.159 14:34:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:36.159 14:34:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:36.159 14:34:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:36.159 14:34:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:36.159 14:34:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:36.159 14:34:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:36.159 14:34:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:36.159 14:34:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:36.159 14:34:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:36.159 14:34:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:36.159 14:34:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:36.159 14:34:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:36.159 14:34:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.HeHrH4Od5Y 00:09:36.159 14:34:27 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73360 00:09:36.159 14:34:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73360 00:09:36.159 14:34:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 73360 ']' 00:09:36.159 14:34:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.159 14:34:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:36.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.159 14:34:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.159 14:34:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:36.159 14:34:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.160 14:34:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:36.160 [2024-10-01 14:34:27.748853] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:09:36.160 [2024-10-01 14:34:27.748977] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73360 ] 00:09:36.416 [2024-10-01 14:34:27.896361] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.416 [2024-10-01 14:34:28.083326] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.674 [2024-10-01 14:34:28.221683] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:36.674 [2024-10-01 14:34:28.221739] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:36.931 14:34:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:36.931 14:34:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:36.931 14:34:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:36.931 14:34:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:36.931 14:34:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.931 14:34:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.187 BaseBdev1_malloc 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.187 true 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.187 [2024-10-01 14:34:28.639019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:37.187 [2024-10-01 14:34:28.639080] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:37.187 [2024-10-01 14:34:28.639104] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:37.187 [2024-10-01 14:34:28.639121] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:37.187 [2024-10-01 14:34:28.641400] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:37.187 [2024-10-01 14:34:28.641468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:37.187 BaseBdev1 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.187 BaseBdev2_malloc 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:37.187 14:34:28 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.187 true 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.187 [2024-10-01 14:34:28.694086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:37.187 [2024-10-01 14:34:28.694275] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:37.187 [2024-10-01 14:34:28.694306] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:37.187 [2024-10-01 14:34:28.694321] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:37.187 [2024-10-01 14:34:28.696629] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:37.187 [2024-10-01 14:34:28.696669] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:37.187 BaseBdev2 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:09:37.187 BaseBdev3_malloc 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.187 true 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.187 [2024-10-01 14:34:28.738374] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:37.187 [2024-10-01 14:34:28.738430] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:37.187 [2024-10-01 14:34:28.738455] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:37.187 [2024-10-01 14:34:28.738470] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:37.187 [2024-10-01 14:34:28.740837] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:37.187 [2024-10-01 14:34:28.740966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:37.187 BaseBdev3 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.187 BaseBdev4_malloc 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.187 true 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.187 [2024-10-01 14:34:28.782676] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:09:37.187 [2024-10-01 14:34:28.782754] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:37.187 [2024-10-01 14:34:28.782781] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:37.187 [2024-10-01 14:34:28.782800] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:37.187 [2024-10-01 14:34:28.785145] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:37.187 [2024-10-01 14:34:28.785194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:09:37.187 BaseBdev4 
00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.187 [2024-10-01 14:34:28.790770] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:37.187 [2024-10-01 14:34:28.792631] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:37.187 [2024-10-01 14:34:28.792840] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:37.187 [2024-10-01 14:34:28.792914] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:37.187 [2024-10-01 14:34:28.793149] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:09:37.187 [2024-10-01 14:34:28.793161] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:37.187 [2024-10-01 14:34:28.793438] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:37.187 [2024-10-01 14:34:28.793593] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:09:37.187 [2024-10-01 14:34:28.793602] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:09:37.187 [2024-10-01 14:34:28.793793] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.187 14:34:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:09:37.188 14:34:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:37.188 14:34:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:37.188 14:34:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.188 14:34:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.188 14:34:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:37.188 14:34:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.188 14:34:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.188 14:34:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.188 14:34:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.188 14:34:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:37.188 14:34:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.188 14:34:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.188 14:34:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.188 14:34:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.188 14:34:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.188 "name": "raid_bdev1", 00:09:37.188 "uuid": "3263a122-c9d0-42b3-9929-662a1a2f6d75", 00:09:37.188 "strip_size_kb": 0, 00:09:37.188 "state": "online", 00:09:37.188 "raid_level": "raid1", 00:09:37.188 "superblock": true, 00:09:37.188 "num_base_bdevs": 4, 00:09:37.188 "num_base_bdevs_discovered": 4, 00:09:37.188 
"num_base_bdevs_operational": 4, 00:09:37.188 "base_bdevs_list": [ 00:09:37.188 { 00:09:37.188 "name": "BaseBdev1", 00:09:37.188 "uuid": "7d9af38b-4f07-5871-9925-7e937753ea50", 00:09:37.188 "is_configured": true, 00:09:37.188 "data_offset": 2048, 00:09:37.188 "data_size": 63488 00:09:37.188 }, 00:09:37.188 { 00:09:37.188 "name": "BaseBdev2", 00:09:37.188 "uuid": "f4c16da1-c777-5706-af5d-17651aca1404", 00:09:37.188 "is_configured": true, 00:09:37.188 "data_offset": 2048, 00:09:37.188 "data_size": 63488 00:09:37.188 }, 00:09:37.188 { 00:09:37.188 "name": "BaseBdev3", 00:09:37.188 "uuid": "1996f1e7-9917-5dc2-a214-459bcdf3f4fa", 00:09:37.188 "is_configured": true, 00:09:37.188 "data_offset": 2048, 00:09:37.188 "data_size": 63488 00:09:37.188 }, 00:09:37.188 { 00:09:37.188 "name": "BaseBdev4", 00:09:37.188 "uuid": "33e92aae-0d8d-5c9f-b451-b9a8ab2fefec", 00:09:37.188 "is_configured": true, 00:09:37.188 "data_offset": 2048, 00:09:37.188 "data_size": 63488 00:09:37.188 } 00:09:37.188 ] 00:09:37.188 }' 00:09:37.188 14:34:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.188 14:34:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.445 14:34:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:37.445 14:34:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:37.702 [2024-10-01 14:34:29.219806] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:38.641 14:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:38.641 14:34:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.641 14:34:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.641 [2024-10-01 14:34:30.130191] 
bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:38.641 [2024-10-01 14:34:30.130253] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:38.641 [2024-10-01 14:34:30.130458] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:09:38.641 14:34:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.641 14:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:38.641 14:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:38.641 14:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:38.641 14:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:09:38.641 14:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:38.641 14:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:38.641 14:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:38.641 14:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:38.641 14:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:38.641 14:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.641 14:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.641 14:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.641 14:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.641 14:34:30 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.641 14:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.641 14:34:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.641 14:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:38.641 14:34:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.641 14:34:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.641 14:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.641 "name": "raid_bdev1", 00:09:38.641 "uuid": "3263a122-c9d0-42b3-9929-662a1a2f6d75", 00:09:38.641 "strip_size_kb": 0, 00:09:38.641 "state": "online", 00:09:38.641 "raid_level": "raid1", 00:09:38.641 "superblock": true, 00:09:38.641 "num_base_bdevs": 4, 00:09:38.641 "num_base_bdevs_discovered": 3, 00:09:38.641 "num_base_bdevs_operational": 3, 00:09:38.641 "base_bdevs_list": [ 00:09:38.641 { 00:09:38.641 "name": null, 00:09:38.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.641 "is_configured": false, 00:09:38.641 "data_offset": 0, 00:09:38.641 "data_size": 63488 00:09:38.641 }, 00:09:38.641 { 00:09:38.641 "name": "BaseBdev2", 00:09:38.641 "uuid": "f4c16da1-c777-5706-af5d-17651aca1404", 00:09:38.641 "is_configured": true, 00:09:38.641 "data_offset": 2048, 00:09:38.641 "data_size": 63488 00:09:38.641 }, 00:09:38.641 { 00:09:38.641 "name": "BaseBdev3", 00:09:38.641 "uuid": "1996f1e7-9917-5dc2-a214-459bcdf3f4fa", 00:09:38.641 "is_configured": true, 00:09:38.641 "data_offset": 2048, 00:09:38.641 "data_size": 63488 00:09:38.641 }, 00:09:38.641 { 00:09:38.641 "name": "BaseBdev4", 00:09:38.641 "uuid": "33e92aae-0d8d-5c9f-b451-b9a8ab2fefec", 00:09:38.641 "is_configured": true, 00:09:38.641 "data_offset": 2048, 00:09:38.641 "data_size": 63488 00:09:38.641 } 00:09:38.641 ] 
00:09:38.641 }' 00:09:38.641 14:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.641 14:34:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.898 14:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:38.898 14:34:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.898 14:34:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.898 [2024-10-01 14:34:30.445392] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:38.898 [2024-10-01 14:34:30.445612] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:38.898 [2024-10-01 14:34:30.448095] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:38.898 [2024-10-01 14:34:30.448128] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:38.898 [2024-10-01 14:34:30.448213] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:38.898 [2024-10-01 14:34:30.448221] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:09:38.898 { 00:09:38.898 "results": [ 00:09:38.898 { 00:09:38.898 "job": "raid_bdev1", 00:09:38.898 "core_mask": "0x1", 00:09:38.898 "workload": "randrw", 00:09:38.898 "percentage": 50, 00:09:38.898 "status": "finished", 00:09:38.898 "queue_depth": 1, 00:09:38.898 "io_size": 131072, 00:09:38.898 "runtime": 1.223682, 00:09:38.898 "iops": 12831.76511544666, 00:09:38.898 "mibps": 1603.9706394308325, 00:09:38.898 "io_failed": 0, 00:09:38.898 "io_timeout": 0, 00:09:38.898 "avg_latency_us": 75.08569256243692, 00:09:38.898 "min_latency_us": 23.433846153846154, 00:09:38.898 "max_latency_us": 1392.64 00:09:38.898 } 00:09:38.898 ], 00:09:38.898 "core_count": 1 00:09:38.898 } 
00:09:38.898 14:34:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.898 14:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73360 00:09:38.898 14:34:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 73360 ']' 00:09:38.898 14:34:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 73360 00:09:38.898 14:34:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:09:38.898 14:34:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:38.898 14:34:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73360 00:09:38.898 14:34:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:38.898 14:34:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:38.898 14:34:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73360' 00:09:38.898 killing process with pid 73360 00:09:38.898 14:34:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 73360 00:09:38.898 [2024-10-01 14:34:30.474526] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:38.898 14:34:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 73360 00:09:39.155 [2024-10-01 14:34:30.635361] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:39.718 14:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:39.718 14:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.HeHrH4Od5Y 00:09:39.718 14:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:39.718 14:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:09:39.718 14:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:39.718 14:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:39.718 14:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:39.718 14:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:39.718 ************************************ 00:09:39.718 END TEST raid_write_error_test 00:09:39.718 ************************************ 00:09:39.718 00:09:39.718 real 0m3.654s 00:09:39.718 user 0m4.342s 00:09:39.718 sys 0m0.401s 00:09:39.718 14:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:39.718 14:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.718 14:34:31 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:09:39.718 14:34:31 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:09:39.718 14:34:31 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:09:39.718 14:34:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:09:39.718 14:34:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:39.718 14:34:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:39.718 ************************************ 00:09:39.718 START TEST raid_rebuild_test 00:09:39.718 ************************************ 00:09:39.718 14:34:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:09:39.718 14:34:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:09:39.718 14:34:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:09:39.718 14:34:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:09:39.718 
14:34:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:09:39.718 14:34:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:09:39.718 14:34:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:09:39.718 14:34:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:09:39.718 14:34:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:09:39.718 14:34:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:09:39.718 14:34:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:09:39.718 14:34:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:09:39.718 14:34:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:09:39.718 14:34:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:09:39.718 14:34:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:39.718 14:34:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:09:39.718 14:34:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:09:39.718 14:34:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:09:39.718 14:34:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:09:39.718 14:34:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:09:39.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:39.718 14:34:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:09:39.718 14:34:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:09:39.718 14:34:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:09:39.718 14:34:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:09:39.718 14:34:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=73493 00:09:39.718 14:34:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 73493 00:09:39.718 14:34:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 73493 ']' 00:09:39.718 14:34:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.718 14:34:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:39.718 14:34:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.718 14:34:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:39.718 14:34:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.718 14:34:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:09:39.975 I/O size of 3145728 is greater than zero copy threshold (65536). 00:09:39.975 Zero copy mechanism will not be used. 00:09:39.975 [2024-10-01 14:34:31.447065] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:09:39.975 [2024-10-01 14:34:31.447189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73493 ] 00:09:39.975 [2024-10-01 14:34:31.593898] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.233 [2024-10-01 14:34:31.781677] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.491 [2024-10-01 14:34:31.917518] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:40.491 [2024-10-01 14:34:31.917700] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:40.748 14:34:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:40.748 14:34:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:09:40.748 14:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:09:40.748 14:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:40.748 14:34:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.748 14:34:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.748 BaseBdev1_malloc 00:09:40.748 14:34:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.748 14:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:09:40.748 14:34:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.748 14:34:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.748 [2024-10-01 14:34:32.291347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:09:40.748 
[2024-10-01 14:34:32.291413] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:40.748 [2024-10-01 14:34:32.291434] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:40.748 [2024-10-01 14:34:32.291447] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:40.748 [2024-10-01 14:34:32.293581] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:40.748 [2024-10-01 14:34:32.293761] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:40.748 BaseBdev1 00:09:40.748 14:34:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.748 14:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:09:40.748 14:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:40.748 14:34:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.748 14:34:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.748 BaseBdev2_malloc 00:09:40.748 14:34:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.748 14:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:09:40.748 14:34:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.748 14:34:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.748 [2024-10-01 14:34:32.348498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:09:40.748 [2024-10-01 14:34:32.348727] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:40.748 [2024-10-01 14:34:32.348753] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:09:40.748 [2024-10-01 14:34:32.348766] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:40.748 [2024-10-01 14:34:32.350888] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:40.748 [2024-10-01 14:34:32.350924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:40.748 BaseBdev2 00:09:40.748 14:34:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.748 14:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:09:40.748 14:34:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.748 14:34:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.748 spare_malloc 00:09:40.748 14:34:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.748 14:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:09:40.748 14:34:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.748 14:34:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.748 spare_delay 00:09:40.748 14:34:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.748 14:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:09:40.748 14:34:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.748 14:34:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.748 [2024-10-01 14:34:32.396555] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:09:40.748 [2024-10-01 14:34:32.396614] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:09:40.748 [2024-10-01 14:34:32.396631] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:40.748 [2024-10-01 14:34:32.396641] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:40.748 [2024-10-01 14:34:32.398780] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:40.748 [2024-10-01 14:34:32.398969] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:09:40.748 spare 00:09:40.748 14:34:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.748 14:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:09:40.748 14:34:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.748 14:34:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.748 [2024-10-01 14:34:32.404595] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:40.749 [2024-10-01 14:34:32.406458] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:40.749 [2024-10-01 14:34:32.406632] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:40.749 [2024-10-01 14:34:32.406648] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:40.749 [2024-10-01 14:34:32.406942] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:40.749 [2024-10-01 14:34:32.407080] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:40.749 [2024-10-01 14:34:32.407089] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:40.749 [2024-10-01 14:34:32.407223] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:09:40.749 14:34:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.749 14:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:40.749 14:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:40.749 14:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:40.749 14:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:40.749 14:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:40.749 14:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:40.749 14:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.749 14:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.749 14:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.749 14:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.749 14:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:40.749 14:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.749 14:34:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.749 14:34:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.749 14:34:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.006 14:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.006 "name": "raid_bdev1", 00:09:41.006 "uuid": "79e9fb33-cd9f-40ce-a46e-7ce29c2d8a2a", 00:09:41.006 "strip_size_kb": 0, 00:09:41.006 "state": "online", 00:09:41.006 
"raid_level": "raid1", 00:09:41.006 "superblock": false, 00:09:41.006 "num_base_bdevs": 2, 00:09:41.006 "num_base_bdevs_discovered": 2, 00:09:41.006 "num_base_bdevs_operational": 2, 00:09:41.006 "base_bdevs_list": [ 00:09:41.006 { 00:09:41.006 "name": "BaseBdev1", 00:09:41.006 "uuid": "86e6c678-d545-5c37-afca-eee19f94ac76", 00:09:41.006 "is_configured": true, 00:09:41.006 "data_offset": 0, 00:09:41.006 "data_size": 65536 00:09:41.006 }, 00:09:41.006 { 00:09:41.006 "name": "BaseBdev2", 00:09:41.006 "uuid": "2b87c3af-b847-53d4-86b7-010d5bf74312", 00:09:41.006 "is_configured": true, 00:09:41.006 "data_offset": 0, 00:09:41.006 "data_size": 65536 00:09:41.006 } 00:09:41.006 ] 00:09:41.006 }' 00:09:41.006 14:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.006 14:34:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.263 14:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:41.263 14:34:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.263 14:34:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.263 14:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:09:41.263 [2024-10-01 14:34:32.744994] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:41.263 14:34:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.263 14:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:09:41.263 14:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.263 14:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:09:41.263 14:34:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.263 14:34:32 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.263 14:34:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.263 14:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:09:41.263 14:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:09:41.263 14:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:09:41.263 14:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:09:41.263 14:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:09:41.263 14:34:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:09:41.263 14:34:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:09:41.263 14:34:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:41.263 14:34:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:09:41.263 14:34:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:41.263 14:34:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:09:41.263 14:34:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:41.263 14:34:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:41.263 14:34:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:09:41.521 [2024-10-01 14:34:33.004809] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:41.521 /dev/nbd0 00:09:41.521 14:34:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:41.521 14:34:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:09:41.521 14:34:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:09:41.521 14:34:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:09:41.521 14:34:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:41.521 14:34:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:41.521 14:34:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:09:41.521 14:34:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:09:41.521 14:34:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:41.521 14:34:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:41.521 14:34:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:41.521 1+0 records in 00:09:41.521 1+0 records out 00:09:41.521 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000431395 s, 9.5 MB/s 00:09:41.521 14:34:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:41.521 14:34:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:09:41.521 14:34:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:41.521 14:34:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:41.521 14:34:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:09:41.521 14:34:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:41.521 14:34:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:41.521 14:34:33 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:09:41.521 14:34:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:09:41.521 14:34:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:09:46.848 65536+0 records in 00:09:46.848 65536+0 records out 00:09:46.848 33554432 bytes (34 MB, 32 MiB) copied, 4.80786 s, 7.0 MB/s 00:09:46.848 14:34:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:09:46.848 14:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:09:46.848 14:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:46.848 14:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:46.848 14:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:09:46.848 14:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:46.848 14:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:09:46.848 [2024-10-01 14:34:38.067045] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:46.848 14:34:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:46.848 14:34:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:46.848 14:34:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:46.848 14:34:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:46.848 14:34:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:46.848 14:34:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:46.848 14:34:38 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:09:46.848 14:34:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:09:46.848 14:34:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:09:46.848 14:34:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.848 14:34:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.848 [2024-10-01 14:34:38.095115] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:46.848 14:34:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.848 14:34:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:46.848 14:34:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:46.848 14:34:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:46.848 14:34:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:46.848 14:34:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:46.848 14:34:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:46.848 14:34:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.848 14:34:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.848 14:34:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.848 14:34:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.848 14:34:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.848 14:34:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.848 14:34:38 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:46.848 14:34:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:46.848 14:34:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.848 14:34:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.848 "name": "raid_bdev1", 00:09:46.848 "uuid": "79e9fb33-cd9f-40ce-a46e-7ce29c2d8a2a", 00:09:46.848 "strip_size_kb": 0, 00:09:46.848 "state": "online", 00:09:46.848 "raid_level": "raid1", 00:09:46.848 "superblock": false, 00:09:46.848 "num_base_bdevs": 2, 00:09:46.848 "num_base_bdevs_discovered": 1, 00:09:46.848 "num_base_bdevs_operational": 1, 00:09:46.848 "base_bdevs_list": [ 00:09:46.848 { 00:09:46.848 "name": null, 00:09:46.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.848 "is_configured": false, 00:09:46.848 "data_offset": 0, 00:09:46.848 "data_size": 65536 00:09:46.848 }, 00:09:46.848 { 00:09:46.848 "name": "BaseBdev2", 00:09:46.848 "uuid": "2b87c3af-b847-53d4-86b7-010d5bf74312", 00:09:46.848 "is_configured": true, 00:09:46.848 "data_offset": 0, 00:09:46.848 "data_size": 65536 00:09:46.848 } 00:09:46.848 ] 00:09:46.848 }' 00:09:46.848 14:34:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.848 14:34:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.848 14:34:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:09:46.848 14:34:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.848 14:34:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.848 [2024-10-01 14:34:38.459215] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:09:46.848 [2024-10-01 14:34:38.468197] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:09:46.848 14:34:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.848 14:34:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:09:46.848 [2024-10-01 14:34:38.469800] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:09:48.222 14:34:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:09:48.222 14:34:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:09:48.222 14:34:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:09:48.222 14:34:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:09:48.222 14:34:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:09:48.222 14:34:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.222 14:34:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.222 14:34:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.222 14:34:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.222 14:34:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.222 14:34:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:09:48.222 "name": "raid_bdev1", 00:09:48.222 "uuid": "79e9fb33-cd9f-40ce-a46e-7ce29c2d8a2a", 00:09:48.222 "strip_size_kb": 0, 00:09:48.222 "state": "online", 00:09:48.222 "raid_level": "raid1", 00:09:48.222 "superblock": false, 00:09:48.222 "num_base_bdevs": 2, 00:09:48.222 "num_base_bdevs_discovered": 2, 00:09:48.222 "num_base_bdevs_operational": 2, 00:09:48.222 "process": { 00:09:48.222 "type": "rebuild", 00:09:48.222 "target": "spare", 00:09:48.222 "progress": { 00:09:48.222 
"blocks": 20480, 00:09:48.222 "percent": 31 00:09:48.222 } 00:09:48.222 }, 00:09:48.222 "base_bdevs_list": [ 00:09:48.222 { 00:09:48.222 "name": "spare", 00:09:48.222 "uuid": "583c0ef2-1844-5cb7-8ecf-50f902fc6b22", 00:09:48.222 "is_configured": true, 00:09:48.222 "data_offset": 0, 00:09:48.222 "data_size": 65536 00:09:48.222 }, 00:09:48.222 { 00:09:48.222 "name": "BaseBdev2", 00:09:48.222 "uuid": "2b87c3af-b847-53d4-86b7-010d5bf74312", 00:09:48.222 "is_configured": true, 00:09:48.222 "data_offset": 0, 00:09:48.222 "data_size": 65536 00:09:48.222 } 00:09:48.222 ] 00:09:48.222 }' 00:09:48.222 14:34:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:09:48.222 14:34:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:09:48.222 14:34:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:09:48.222 14:34:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:09:48.222 14:34:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:09:48.222 14:34:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.222 14:34:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.222 [2024-10-01 14:34:39.584148] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:09:48.222 [2024-10-01 14:34:39.675466] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:09:48.222 [2024-10-01 14:34:39.675539] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:48.222 [2024-10-01 14:34:39.675551] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:09:48.222 [2024-10-01 14:34:39.675559] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:09:48.222 14:34:39 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.222 14:34:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:48.222 14:34:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:48.222 14:34:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:48.222 14:34:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.222 14:34:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:48.222 14:34:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:48.222 14:34:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.222 14:34:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.222 14:34:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.222 14:34:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.222 14:34:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.222 14:34:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.222 14:34:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.222 14:34:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.222 14:34:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.222 14:34:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.222 "name": "raid_bdev1", 00:09:48.222 "uuid": "79e9fb33-cd9f-40ce-a46e-7ce29c2d8a2a", 00:09:48.222 "strip_size_kb": 0, 00:09:48.222 "state": "online", 00:09:48.222 "raid_level": "raid1", 00:09:48.222 
"superblock": false, 00:09:48.222 "num_base_bdevs": 2, 00:09:48.222 "num_base_bdevs_discovered": 1, 00:09:48.222 "num_base_bdevs_operational": 1, 00:09:48.222 "base_bdevs_list": [ 00:09:48.222 { 00:09:48.222 "name": null, 00:09:48.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.222 "is_configured": false, 00:09:48.222 "data_offset": 0, 00:09:48.222 "data_size": 65536 00:09:48.222 }, 00:09:48.222 { 00:09:48.222 "name": "BaseBdev2", 00:09:48.222 "uuid": "2b87c3af-b847-53d4-86b7-010d5bf74312", 00:09:48.222 "is_configured": true, 00:09:48.222 "data_offset": 0, 00:09:48.222 "data_size": 65536 00:09:48.222 } 00:09:48.222 ] 00:09:48.222 }' 00:09:48.222 14:34:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.222 14:34:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.481 14:34:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:09:48.481 14:34:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:09:48.481 14:34:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:09:48.481 14:34:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:09:48.481 14:34:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:09:48.481 14:34:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.481 14:34:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.481 14:34:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.481 14:34:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.481 14:34:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.481 14:34:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:09:48.481 "name": "raid_bdev1", 00:09:48.481 "uuid": "79e9fb33-cd9f-40ce-a46e-7ce29c2d8a2a", 00:09:48.481 "strip_size_kb": 0, 00:09:48.481 "state": "online", 00:09:48.481 "raid_level": "raid1", 00:09:48.481 "superblock": false, 00:09:48.481 "num_base_bdevs": 2, 00:09:48.481 "num_base_bdevs_discovered": 1, 00:09:48.481 "num_base_bdevs_operational": 1, 00:09:48.481 "base_bdevs_list": [ 00:09:48.481 { 00:09:48.481 "name": null, 00:09:48.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.481 "is_configured": false, 00:09:48.481 "data_offset": 0, 00:09:48.481 "data_size": 65536 00:09:48.481 }, 00:09:48.481 { 00:09:48.481 "name": "BaseBdev2", 00:09:48.481 "uuid": "2b87c3af-b847-53d4-86b7-010d5bf74312", 00:09:48.481 "is_configured": true, 00:09:48.481 "data_offset": 0, 00:09:48.481 "data_size": 65536 00:09:48.481 } 00:09:48.481 ] 00:09:48.481 }' 00:09:48.481 14:34:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:09:48.481 14:34:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:09:48.481 14:34:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:09:48.481 14:34:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:09:48.481 14:34:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:09:48.481 14:34:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.481 14:34:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.481 [2024-10-01 14:34:40.127591] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:09:48.481 [2024-10-01 14:34:40.136396] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:09:48.481 14:34:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.481 
14:34:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:09:48.481 [2024-10-01 14:34:40.137938] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:09:49.876 14:34:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:09:49.876 14:34:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:09:49.876 14:34:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:09:49.876 14:34:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:09:49.876 14:34:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:09:49.876 14:34:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.876 14:34:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.876 14:34:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:49.876 14:34:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.876 14:34:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.876 14:34:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:09:49.876 "name": "raid_bdev1", 00:09:49.876 "uuid": "79e9fb33-cd9f-40ce-a46e-7ce29c2d8a2a", 00:09:49.876 "strip_size_kb": 0, 00:09:49.876 "state": "online", 00:09:49.876 "raid_level": "raid1", 00:09:49.876 "superblock": false, 00:09:49.876 "num_base_bdevs": 2, 00:09:49.876 "num_base_bdevs_discovered": 2, 00:09:49.876 "num_base_bdevs_operational": 2, 00:09:49.876 "process": { 00:09:49.876 "type": "rebuild", 00:09:49.876 "target": "spare", 00:09:49.876 "progress": { 00:09:49.876 "blocks": 20480, 00:09:49.876 "percent": 31 00:09:49.876 } 00:09:49.876 }, 00:09:49.876 "base_bdevs_list": [ 
00:09:49.876 { 00:09:49.876 "name": "spare", 00:09:49.876 "uuid": "583c0ef2-1844-5cb7-8ecf-50f902fc6b22", 00:09:49.876 "is_configured": true, 00:09:49.876 "data_offset": 0, 00:09:49.876 "data_size": 65536 00:09:49.876 }, 00:09:49.876 { 00:09:49.876 "name": "BaseBdev2", 00:09:49.876 "uuid": "2b87c3af-b847-53d4-86b7-010d5bf74312", 00:09:49.876 "is_configured": true, 00:09:49.876 "data_offset": 0, 00:09:49.876 "data_size": 65536 00:09:49.876 } 00:09:49.876 ] 00:09:49.876 }' 00:09:49.876 14:34:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:09:49.876 14:34:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:09:49.876 14:34:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:09:49.876 14:34:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:09:49.876 14:34:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:09:49.876 14:34:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:09:49.876 14:34:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:09:49.876 14:34:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:09:49.876 14:34:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=301 00:09:49.876 14:34:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:09:49.876 14:34:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:09:49.876 14:34:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:09:49.876 14:34:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:09:49.876 14:34:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:09:49.876 
14:34:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:09:49.876 14:34:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.876 14:34:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:49.876 14:34:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.876 14:34:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.876 14:34:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.876 14:34:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:09:49.876 "name": "raid_bdev1", 00:09:49.876 "uuid": "79e9fb33-cd9f-40ce-a46e-7ce29c2d8a2a", 00:09:49.876 "strip_size_kb": 0, 00:09:49.876 "state": "online", 00:09:49.876 "raid_level": "raid1", 00:09:49.876 "superblock": false, 00:09:49.876 "num_base_bdevs": 2, 00:09:49.876 "num_base_bdevs_discovered": 2, 00:09:49.876 "num_base_bdevs_operational": 2, 00:09:49.876 "process": { 00:09:49.876 "type": "rebuild", 00:09:49.876 "target": "spare", 00:09:49.876 "progress": { 00:09:49.876 "blocks": 22528, 00:09:49.876 "percent": 34 00:09:49.876 } 00:09:49.876 }, 00:09:49.876 "base_bdevs_list": [ 00:09:49.876 { 00:09:49.876 "name": "spare", 00:09:49.876 "uuid": "583c0ef2-1844-5cb7-8ecf-50f902fc6b22", 00:09:49.876 "is_configured": true, 00:09:49.876 "data_offset": 0, 00:09:49.876 "data_size": 65536 00:09:49.876 }, 00:09:49.876 { 00:09:49.876 "name": "BaseBdev2", 00:09:49.876 "uuid": "2b87c3af-b847-53d4-86b7-010d5bf74312", 00:09:49.876 "is_configured": true, 00:09:49.876 "data_offset": 0, 00:09:49.876 "data_size": 65536 00:09:49.876 } 00:09:49.876 ] 00:09:49.876 }' 00:09:49.876 14:34:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:09:49.876 14:34:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:09:49.876 14:34:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:09:49.876 14:34:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:09:49.876 14:34:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:09:50.810 14:34:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:09:50.810 14:34:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:09:50.810 14:34:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:09:50.810 14:34:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:09:50.810 14:34:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:09:50.810 14:34:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:09:50.810 14:34:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.810 14:34:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.810 14:34:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.810 14:34:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.810 14:34:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.810 14:34:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:09:50.810 "name": "raid_bdev1", 00:09:50.810 "uuid": "79e9fb33-cd9f-40ce-a46e-7ce29c2d8a2a", 00:09:50.810 "strip_size_kb": 0, 00:09:50.810 "state": "online", 00:09:50.810 "raid_level": "raid1", 00:09:50.810 "superblock": false, 00:09:50.810 "num_base_bdevs": 2, 00:09:50.810 "num_base_bdevs_discovered": 2, 00:09:50.810 "num_base_bdevs_operational": 2, 00:09:50.810 "process": { 
00:09:50.810 "type": "rebuild", 00:09:50.810 "target": "spare", 00:09:50.810 "progress": { 00:09:50.810 "blocks": 43008, 00:09:50.810 "percent": 65 00:09:50.810 } 00:09:50.810 }, 00:09:50.810 "base_bdevs_list": [ 00:09:50.810 { 00:09:50.810 "name": "spare", 00:09:50.810 "uuid": "583c0ef2-1844-5cb7-8ecf-50f902fc6b22", 00:09:50.810 "is_configured": true, 00:09:50.810 "data_offset": 0, 00:09:50.810 "data_size": 65536 00:09:50.810 }, 00:09:50.810 { 00:09:50.810 "name": "BaseBdev2", 00:09:50.810 "uuid": "2b87c3af-b847-53d4-86b7-010d5bf74312", 00:09:50.810 "is_configured": true, 00:09:50.810 "data_offset": 0, 00:09:50.810 "data_size": 65536 00:09:50.810 } 00:09:50.810 ] 00:09:50.810 }' 00:09:50.810 14:34:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:09:50.810 14:34:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:09:50.810 14:34:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:09:50.810 14:34:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:09:50.810 14:34:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:09:51.742 [2024-10-01 14:34:43.352489] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:09:51.742 [2024-10-01 14:34:43.352557] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:09:51.742 [2024-10-01 14:34:43.352600] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:51.999 14:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:09:51.999 14:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:09:51.999 14:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:09:51.999 14:34:43 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:09:51.999 14:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:09:51.999 14:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:09:51.999 14:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.999 14:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:51.999 14:34:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.999 14:34:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.999 14:34:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.999 14:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:09:51.999 "name": "raid_bdev1", 00:09:51.999 "uuid": "79e9fb33-cd9f-40ce-a46e-7ce29c2d8a2a", 00:09:51.999 "strip_size_kb": 0, 00:09:51.999 "state": "online", 00:09:51.999 "raid_level": "raid1", 00:09:51.999 "superblock": false, 00:09:51.999 "num_base_bdevs": 2, 00:09:51.999 "num_base_bdevs_discovered": 2, 00:09:51.999 "num_base_bdevs_operational": 2, 00:09:51.999 "base_bdevs_list": [ 00:09:51.999 { 00:09:51.999 "name": "spare", 00:09:51.999 "uuid": "583c0ef2-1844-5cb7-8ecf-50f902fc6b22", 00:09:51.999 "is_configured": true, 00:09:51.999 "data_offset": 0, 00:09:51.999 "data_size": 65536 00:09:51.999 }, 00:09:51.999 { 00:09:51.999 "name": "BaseBdev2", 00:09:51.999 "uuid": "2b87c3af-b847-53d4-86b7-010d5bf74312", 00:09:51.999 "is_configured": true, 00:09:51.999 "data_offset": 0, 00:09:51.999 "data_size": 65536 00:09:51.999 } 00:09:51.999 ] 00:09:51.999 }' 00:09:51.999 14:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:09:51.999 14:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:09:52.000 14:34:43 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:09:52.000 14:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:09:52.000 14:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:09:52.000 14:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:09:52.000 14:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:09:52.000 14:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:09:52.000 14:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:09:52.000 14:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:09:52.000 14:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.000 14:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:52.000 14:34:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.000 14:34:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.000 14:34:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.000 14:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:09:52.000 "name": "raid_bdev1", 00:09:52.000 "uuid": "79e9fb33-cd9f-40ce-a46e-7ce29c2d8a2a", 00:09:52.000 "strip_size_kb": 0, 00:09:52.000 "state": "online", 00:09:52.000 "raid_level": "raid1", 00:09:52.000 "superblock": false, 00:09:52.000 "num_base_bdevs": 2, 00:09:52.000 "num_base_bdevs_discovered": 2, 00:09:52.000 "num_base_bdevs_operational": 2, 00:09:52.000 "base_bdevs_list": [ 00:09:52.000 { 00:09:52.000 "name": "spare", 00:09:52.000 "uuid": "583c0ef2-1844-5cb7-8ecf-50f902fc6b22", 00:09:52.000 "is_configured": true, 
00:09:52.000 "data_offset": 0, 00:09:52.000 "data_size": 65536 00:09:52.000 }, 00:09:52.000 { 00:09:52.000 "name": "BaseBdev2", 00:09:52.000 "uuid": "2b87c3af-b847-53d4-86b7-010d5bf74312", 00:09:52.000 "is_configured": true, 00:09:52.000 "data_offset": 0, 00:09:52.000 "data_size": 65536 00:09:52.000 } 00:09:52.000 ] 00:09:52.000 }' 00:09:52.000 14:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:09:52.000 14:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:09:52.000 14:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:09:52.000 14:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:09:52.000 14:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:52.000 14:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:52.000 14:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:52.000 14:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:52.000 14:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:52.000 14:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:52.000 14:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.000 14:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.000 14:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.000 14:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.000 14:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:52.000 14:34:43 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.000 14:34:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.000 14:34:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.000 14:34:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.000 14:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.000 "name": "raid_bdev1", 00:09:52.000 "uuid": "79e9fb33-cd9f-40ce-a46e-7ce29c2d8a2a", 00:09:52.000 "strip_size_kb": 0, 00:09:52.000 "state": "online", 00:09:52.000 "raid_level": "raid1", 00:09:52.000 "superblock": false, 00:09:52.000 "num_base_bdevs": 2, 00:09:52.000 "num_base_bdevs_discovered": 2, 00:09:52.000 "num_base_bdevs_operational": 2, 00:09:52.000 "base_bdevs_list": [ 00:09:52.000 { 00:09:52.000 "name": "spare", 00:09:52.000 "uuid": "583c0ef2-1844-5cb7-8ecf-50f902fc6b22", 00:09:52.000 "is_configured": true, 00:09:52.000 "data_offset": 0, 00:09:52.000 "data_size": 65536 00:09:52.000 }, 00:09:52.000 { 00:09:52.000 "name": "BaseBdev2", 00:09:52.000 "uuid": "2b87c3af-b847-53d4-86b7-010d5bf74312", 00:09:52.000 "is_configured": true, 00:09:52.000 "data_offset": 0, 00:09:52.000 "data_size": 65536 00:09:52.000 } 00:09:52.000 ] 00:09:52.000 }' 00:09:52.000 14:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.000 14:34:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.567 14:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:52.567 14:34:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.567 14:34:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.567 [2024-10-01 14:34:43.952455] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:52.567 [2024-10-01 
14:34:43.952484] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:52.567 [2024-10-01 14:34:43.952546] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:52.567 [2024-10-01 14:34:43.952608] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:52.567 [2024-10-01 14:34:43.952617] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:52.567 14:34:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.567 14:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.567 14:34:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.567 14:34:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.567 14:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:09:52.568 14:34:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.568 14:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:09:52.568 14:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:09:52.568 14:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:09:52.568 14:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:09:52.568 14:34:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:09:52.568 14:34:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:09:52.568 14:34:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:52.568 14:34:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:52.568 14:34:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:52.568 14:34:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:09:52.568 14:34:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:52.568 14:34:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:52.568 14:34:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:09:52.568 /dev/nbd0 00:09:52.568 14:34:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:52.568 14:34:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:52.568 14:34:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:09:52.568 14:34:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:09:52.568 14:34:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:52.568 14:34:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:52.568 14:34:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:09:52.568 14:34:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:09:52.568 14:34:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:52.568 14:34:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:52.568 14:34:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:52.568 1+0 records in 00:09:52.568 1+0 records out 00:09:52.568 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248025 s, 16.5 MB/s 00:09:52.568 14:34:44 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:52.568 14:34:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:09:52.568 14:34:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:52.568 14:34:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:52.568 14:34:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:09:52.568 14:34:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:52.568 14:34:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:52.568 14:34:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:09:52.825 /dev/nbd1 00:09:52.825 14:34:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:52.825 14:34:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:52.825 14:34:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:09:52.825 14:34:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:09:52.825 14:34:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:52.825 14:34:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:52.826 14:34:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:09:52.826 14:34:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:09:52.826 14:34:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:52.826 14:34:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:52.826 14:34:44 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:52.826 1+0 records in 00:09:52.826 1+0 records out 00:09:52.826 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282805 s, 14.5 MB/s 00:09:52.826 14:34:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:52.826 14:34:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:09:52.826 14:34:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:52.826 14:34:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:52.826 14:34:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:09:52.826 14:34:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:52.826 14:34:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:52.826 14:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:09:53.084 14:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:09:53.084 14:34:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:09:53.084 14:34:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:53.084 14:34:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:53.084 14:34:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:09:53.084 14:34:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:53.084 14:34:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:09:53.345 14:34:44 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:53.345 14:34:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:53.345 14:34:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:53.345 14:34:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:53.345 14:34:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:53.345 14:34:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:53.345 14:34:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:09:53.345 14:34:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:09:53.345 14:34:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:53.345 14:34:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:09:53.605 14:34:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:53.605 14:34:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:53.605 14:34:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:53.605 14:34:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:53.605 14:34:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:53.605 14:34:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:53.605 14:34:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:09:53.605 14:34:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:09:53.605 14:34:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:09:53.605 14:34:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 
73493 00:09:53.605 14:34:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 73493 ']' 00:09:53.605 14:34:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 73493 00:09:53.605 14:34:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:09:53.605 14:34:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:53.605 14:34:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73493 00:09:53.605 killing process with pid 73493 00:09:53.605 Received shutdown signal, test time was about 60.000000 seconds 00:09:53.605 00:09:53.605 Latency(us) 00:09:53.605 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:53.605 =================================================================================================================== 00:09:53.605 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:09:53.605 14:34:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:53.605 14:34:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:53.605 14:34:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73493' 00:09:53.605 14:34:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 73493 00:09:53.605 [2024-10-01 14:34:45.121169] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:53.605 14:34:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 73493 00:09:53.866 [2024-10-01 14:34:45.309807] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:54.828 ************************************ 00:09:54.828 END TEST raid_rebuild_test 00:09:54.828 ************************************ 00:09:54.828 14:34:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:09:54.828 00:09:54.828 
real 0m14.754s 00:09:54.828 user 0m16.081s 00:09:54.828 sys 0m2.734s 00:09:54.828 14:34:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:54.828 14:34:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.828 14:34:46 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:09:54.828 14:34:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:09:54.828 14:34:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:54.828 14:34:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:54.828 ************************************ 00:09:54.828 START TEST raid_rebuild_test_sb 00:09:54.828 ************************************ 00:09:54.828 14:34:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:09:54.828 14:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:09:54.828 14:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:09:54.828 14:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:09:54.828 14:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:09:54.828 14:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:09:54.828 14:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:09:54.828 14:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:09:54.828 14:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:09:54.828 14:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:09:54.828 14:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:09:54.828 14:34:46 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:09:54.828 14:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:09:54.828 14:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:09:54.828 14:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:54.828 14:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:09:54.828 14:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:09:54.828 14:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:09:54.828 14:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:09:54.828 14:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:09:54.828 14:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:09:54.828 14:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:09:54.828 14:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:09:54.828 14:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:09:54.828 14:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:09:54.828 14:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=73912 00:09:54.828 14:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 73912 00:09:54.828 14:34:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 73912 ']' 00:09:54.828 14:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:09:54.828 14:34:46 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.828 14:34:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:54.828 14:34:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.828 14:34:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:54.828 14:34:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.828 I/O size of 3145728 is greater than zero copy threshold (65536). 00:09:54.828 Zero copy mechanism will not be used. 00:09:54.828 [2024-10-01 14:34:46.261333] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:09:54.828 [2024-10-01 14:34:46.261492] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73912 ] 00:09:54.828 [2024-10-01 14:34:46.411211] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.086 [2024-10-01 14:34:46.597133] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.086 [2024-10-01 14:34:46.732204] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:55.086 [2024-10-01 14:34:46.732243] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 
00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.652 BaseBdev1_malloc 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.652 [2024-10-01 14:34:47.216543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:09:55.652 [2024-10-01 14:34:47.216612] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.652 [2024-10-01 14:34:47.216634] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:55.652 [2024-10-01 14:34:47.216646] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.652 [2024-10-01 14:34:47.218882] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.652 [2024-10-01 14:34:47.219075] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:55.652 BaseBdev1 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.652 BaseBdev2_malloc 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.652 [2024-10-01 14:34:47.263688] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:09:55.652 [2024-10-01 14:34:47.263767] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.652 [2024-10-01 14:34:47.263786] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:55.652 [2024-10-01 14:34:47.263796] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.652 [2024-10-01 14:34:47.265981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.652 [2024-10-01 14:34:47.266018] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:55.652 BaseBdev2 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.652 spare_malloc 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.652 14:34:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.652 spare_delay 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.652 [2024-10-01 14:34:47.307485] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:09:55.652 [2024-10-01 14:34:47.307723] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.652 [2024-10-01 14:34:47.307747] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:55.652 [2024-10-01 14:34:47.307757] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.652 [2024-10-01 14:34:47.309874] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.652 [2024-10-01 14:34:47.309908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:09:55.652 spare 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:09:55.652 [2024-10-01 14:34:47.315536] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:55.652 [2024-10-01 14:34:47.317429] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:55.652 [2024-10-01 14:34:47.317595] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:55.652 [2024-10-01 14:34:47.317608] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:55.652 [2024-10-01 14:34:47.317887] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:55.652 [2024-10-01 14:34:47.318030] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:55.652 [2024-10-01 14:34:47.318039] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:55.652 [2024-10-01 14:34:47.318169] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.652 14:34:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.652 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.911 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.911 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.911 "name": "raid_bdev1", 00:09:55.911 "uuid": "db022359-e840-4832-9b96-101cf640341e", 00:09:55.911 "strip_size_kb": 0, 00:09:55.911 "state": "online", 00:09:55.911 "raid_level": "raid1", 00:09:55.911 "superblock": true, 00:09:55.911 "num_base_bdevs": 2, 00:09:55.911 "num_base_bdevs_discovered": 2, 00:09:55.911 "num_base_bdevs_operational": 2, 00:09:55.911 "base_bdevs_list": [ 00:09:55.911 { 00:09:55.911 "name": "BaseBdev1", 00:09:55.911 "uuid": "133b765a-1d21-513d-89d9-7c3b5a1cf391", 00:09:55.911 "is_configured": true, 00:09:55.911 "data_offset": 2048, 00:09:55.911 "data_size": 63488 00:09:55.911 }, 00:09:55.911 { 00:09:55.911 "name": "BaseBdev2", 00:09:55.911 "uuid": "399b2df0-292b-597b-9d79-19305d2598c8", 00:09:55.911 "is_configured": true, 00:09:55.912 "data_offset": 2048, 00:09:55.912 "data_size": 63488 00:09:55.912 } 00:09:55.912 ] 00:09:55.912 }' 00:09:55.912 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.912 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:09:56.169 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:56.169 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.169 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.169 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:09:56.169 [2024-10-01 14:34:47.631922] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:56.169 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.169 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:09:56.169 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.169 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:09:56.169 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.169 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.169 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.169 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:09:56.169 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:09:56.169 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:09:56.169 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:09:56.169 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:09:56.169 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:09:56.169 
14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:09:56.169 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:56.169 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:09:56.169 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:56.169 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:09:56.169 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:56.169 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:56.169 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:09:56.427 [2024-10-01 14:34:47.903767] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:56.427 /dev/nbd0 00:09:56.427 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:56.427 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:56.427 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:09:56.427 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:09:56.427 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:56.427 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:56.427 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:09:56.427 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:09:56.427 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:56.427 14:34:47 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:56.427 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:56.427 1+0 records in 00:09:56.427 1+0 records out 00:09:56.427 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000449911 s, 9.1 MB/s 00:09:56.427 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:56.427 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:09:56.427 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:56.427 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:56.427 14:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:09:56.427 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:56.427 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:56.427 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:09:56.427 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:09:56.427 14:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:10:01.685 63488+0 records in 00:10:01.685 63488+0 records out 00:10:01.685 32505856 bytes (33 MB, 31 MiB) copied, 4.95271 s, 6.6 MB/s 00:10:01.685 14:34:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:10:01.685 14:34:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:01.686 14:34:52 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:01.686 14:34:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:01.686 14:34:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:10:01.686 14:34:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:01.686 14:34:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:01.686 [2024-10-01 14:34:53.106373] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:01.686 14:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:01.686 14:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:01.686 14:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:01.686 14:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:01.686 14:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:01.686 14:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:01.686 14:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:10:01.686 14:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:10:01.686 14:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:10:01.686 14:34:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.686 14:34:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.686 [2024-10-01 14:34:53.134452] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:01.686 14:34:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.686 14:34:53 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:01.686 14:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:01.686 14:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:01.686 14:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:01.686 14:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:01.686 14:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:01.686 14:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.686 14:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.686 14:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.686 14:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.686 14:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:01.686 14:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.686 14:34:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.686 14:34:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.686 14:34:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.686 14:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.686 "name": "raid_bdev1", 00:10:01.686 "uuid": "db022359-e840-4832-9b96-101cf640341e", 00:10:01.686 "strip_size_kb": 0, 00:10:01.686 "state": "online", 00:10:01.686 "raid_level": "raid1", 00:10:01.686 "superblock": true, 00:10:01.686 "num_base_bdevs": 2, 
00:10:01.686 "num_base_bdevs_discovered": 1, 00:10:01.686 "num_base_bdevs_operational": 1, 00:10:01.686 "base_bdevs_list": [ 00:10:01.686 { 00:10:01.686 "name": null, 00:10:01.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.686 "is_configured": false, 00:10:01.686 "data_offset": 0, 00:10:01.686 "data_size": 63488 00:10:01.686 }, 00:10:01.686 { 00:10:01.686 "name": "BaseBdev2", 00:10:01.686 "uuid": "399b2df0-292b-597b-9d79-19305d2598c8", 00:10:01.686 "is_configured": true, 00:10:01.686 "data_offset": 2048, 00:10:01.686 "data_size": 63488 00:10:01.686 } 00:10:01.686 ] 00:10:01.686 }' 00:10:01.686 14:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.686 14:34:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.943 14:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:01.943 14:34:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.943 14:34:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.943 [2024-10-01 14:34:53.434537] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:01.943 [2024-10-01 14:34:53.445203] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:10:01.943 14:34:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.943 14:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:10:01.943 [2024-10-01 14:34:53.447175] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:02.942 14:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:02.942 14:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:02.942 14:34:54 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:02.942 14:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:02.942 14:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:02.942 14:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:02.942 14:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.942 14:34:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.942 14:34:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.942 14:34:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.942 14:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:02.942 "name": "raid_bdev1", 00:10:02.942 "uuid": "db022359-e840-4832-9b96-101cf640341e", 00:10:02.942 "strip_size_kb": 0, 00:10:02.942 "state": "online", 00:10:02.942 "raid_level": "raid1", 00:10:02.942 "superblock": true, 00:10:02.942 "num_base_bdevs": 2, 00:10:02.942 "num_base_bdevs_discovered": 2, 00:10:02.942 "num_base_bdevs_operational": 2, 00:10:02.942 "process": { 00:10:02.942 "type": "rebuild", 00:10:02.942 "target": "spare", 00:10:02.942 "progress": { 00:10:02.942 "blocks": 20480, 00:10:02.942 "percent": 32 00:10:02.942 } 00:10:02.942 }, 00:10:02.942 "base_bdevs_list": [ 00:10:02.942 { 00:10:02.942 "name": "spare", 00:10:02.942 "uuid": "c894824d-2cb3-51f2-857a-2702b253a1e9", 00:10:02.942 "is_configured": true, 00:10:02.942 "data_offset": 2048, 00:10:02.942 "data_size": 63488 00:10:02.942 }, 00:10:02.942 { 00:10:02.942 "name": "BaseBdev2", 00:10:02.942 "uuid": "399b2df0-292b-597b-9d79-19305d2598c8", 00:10:02.942 "is_configured": true, 00:10:02.942 "data_offset": 2048, 00:10:02.942 "data_size": 63488 00:10:02.942 } 
00:10:02.942 ] 00:10:02.942 }' 00:10:02.942 14:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:02.943 14:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:02.943 14:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:02.943 14:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:02.943 14:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:10:02.943 14:34:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.943 14:34:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.943 [2024-10-01 14:34:54.552779] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:03.218 [2024-10-01 14:34:54.652723] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:10:03.218 [2024-10-01 14:34:54.652801] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:03.218 [2024-10-01 14:34:54.652816] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:03.218 [2024-10-01 14:34:54.652826] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:10:03.218 14:34:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.218 14:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:03.218 14:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:03.218 14:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:03.218 14:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:03.218 14:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:03.218 14:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:03.218 14:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.218 14:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.218 14:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.218 14:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.218 14:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.218 14:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:03.218 14:34:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.218 14:34:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.218 14:34:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.218 14:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.218 "name": "raid_bdev1", 00:10:03.218 "uuid": "db022359-e840-4832-9b96-101cf640341e", 00:10:03.218 "strip_size_kb": 0, 00:10:03.218 "state": "online", 00:10:03.218 "raid_level": "raid1", 00:10:03.218 "superblock": true, 00:10:03.218 "num_base_bdevs": 2, 00:10:03.218 "num_base_bdevs_discovered": 1, 00:10:03.218 "num_base_bdevs_operational": 1, 00:10:03.218 "base_bdevs_list": [ 00:10:03.218 { 00:10:03.218 "name": null, 00:10:03.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.218 "is_configured": false, 00:10:03.218 "data_offset": 0, 00:10:03.218 "data_size": 63488 00:10:03.218 }, 00:10:03.218 { 00:10:03.218 "name": "BaseBdev2", 00:10:03.218 "uuid": 
"399b2df0-292b-597b-9d79-19305d2598c8", 00:10:03.218 "is_configured": true, 00:10:03.218 "data_offset": 2048, 00:10:03.218 "data_size": 63488 00:10:03.218 } 00:10:03.218 ] 00:10:03.218 }' 00:10:03.218 14:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.218 14:34:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.476 14:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:03.476 14:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:03.476 14:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:03.476 14:34:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:03.477 14:34:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:03.477 14:34:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.477 14:34:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.477 14:34:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.477 14:34:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:03.477 14:34:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.477 14:34:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:03.477 "name": "raid_bdev1", 00:10:03.477 "uuid": "db022359-e840-4832-9b96-101cf640341e", 00:10:03.477 "strip_size_kb": 0, 00:10:03.477 "state": "online", 00:10:03.477 "raid_level": "raid1", 00:10:03.477 "superblock": true, 00:10:03.477 "num_base_bdevs": 2, 00:10:03.477 "num_base_bdevs_discovered": 1, 00:10:03.477 "num_base_bdevs_operational": 1, 00:10:03.477 "base_bdevs_list": [ 00:10:03.477 { 
00:10:03.477 "name": null, 00:10:03.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.477 "is_configured": false, 00:10:03.477 "data_offset": 0, 00:10:03.477 "data_size": 63488 00:10:03.477 }, 00:10:03.477 { 00:10:03.477 "name": "BaseBdev2", 00:10:03.477 "uuid": "399b2df0-292b-597b-9d79-19305d2598c8", 00:10:03.477 "is_configured": true, 00:10:03.477 "data_offset": 2048, 00:10:03.477 "data_size": 63488 00:10:03.477 } 00:10:03.477 ] 00:10:03.477 }' 00:10:03.477 14:34:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:03.477 14:34:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:03.477 14:34:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:03.477 14:34:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:03.477 14:34:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:03.477 14:34:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.477 14:34:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.477 [2024-10-01 14:34:55.107321] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:03.477 [2024-10-01 14:34:55.117295] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:10:03.477 14:34:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.477 14:34:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:10:03.477 [2024-10-01 14:34:55.119198] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:04.848 14:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:04.848 14:34:56 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:04.848 14:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:04.848 14:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:04.848 14:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:04.848 14:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.848 14:34:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.848 14:34:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.848 14:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:04.848 14:34:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.848 14:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:04.848 "name": "raid_bdev1", 00:10:04.848 "uuid": "db022359-e840-4832-9b96-101cf640341e", 00:10:04.848 "strip_size_kb": 0, 00:10:04.848 "state": "online", 00:10:04.848 "raid_level": "raid1", 00:10:04.848 "superblock": true, 00:10:04.848 "num_base_bdevs": 2, 00:10:04.848 "num_base_bdevs_discovered": 2, 00:10:04.848 "num_base_bdevs_operational": 2, 00:10:04.848 "process": { 00:10:04.848 "type": "rebuild", 00:10:04.848 "target": "spare", 00:10:04.848 "progress": { 00:10:04.848 "blocks": 20480, 00:10:04.848 "percent": 32 00:10:04.848 } 00:10:04.848 }, 00:10:04.848 "base_bdevs_list": [ 00:10:04.848 { 00:10:04.848 "name": "spare", 00:10:04.848 "uuid": "c894824d-2cb3-51f2-857a-2702b253a1e9", 00:10:04.848 "is_configured": true, 00:10:04.848 "data_offset": 2048, 00:10:04.848 "data_size": 63488 00:10:04.848 }, 00:10:04.848 { 00:10:04.848 "name": "BaseBdev2", 00:10:04.848 "uuid": "399b2df0-292b-597b-9d79-19305d2598c8", 00:10:04.848 
"is_configured": true, 00:10:04.848 "data_offset": 2048, 00:10:04.848 "data_size": 63488 00:10:04.848 } 00:10:04.848 ] 00:10:04.848 }' 00:10:04.848 14:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:04.848 14:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:04.848 14:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:04.848 14:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:04.848 14:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:10:04.848 14:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:10:04.848 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:10:04.848 14:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:10:04.848 14:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:10:04.848 14:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:10:04.848 14:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=316 00:10:04.848 14:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:04.848 14:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:04.848 14:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:04.848 14:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:04.848 14:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:04.848 14:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:10:04.848 14:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.848 14:34:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.848 14:34:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.848 14:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:04.848 14:34:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.848 14:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:04.848 "name": "raid_bdev1", 00:10:04.848 "uuid": "db022359-e840-4832-9b96-101cf640341e", 00:10:04.848 "strip_size_kb": 0, 00:10:04.848 "state": "online", 00:10:04.848 "raid_level": "raid1", 00:10:04.848 "superblock": true, 00:10:04.848 "num_base_bdevs": 2, 00:10:04.848 "num_base_bdevs_discovered": 2, 00:10:04.848 "num_base_bdevs_operational": 2, 00:10:04.848 "process": { 00:10:04.848 "type": "rebuild", 00:10:04.848 "target": "spare", 00:10:04.848 "progress": { 00:10:04.848 "blocks": 22528, 00:10:04.848 "percent": 35 00:10:04.848 } 00:10:04.848 }, 00:10:04.848 "base_bdevs_list": [ 00:10:04.848 { 00:10:04.848 "name": "spare", 00:10:04.848 "uuid": "c894824d-2cb3-51f2-857a-2702b253a1e9", 00:10:04.848 "is_configured": true, 00:10:04.848 "data_offset": 2048, 00:10:04.848 "data_size": 63488 00:10:04.848 }, 00:10:04.848 { 00:10:04.848 "name": "BaseBdev2", 00:10:04.848 "uuid": "399b2df0-292b-597b-9d79-19305d2598c8", 00:10:04.848 "is_configured": true, 00:10:04.848 "data_offset": 2048, 00:10:04.848 "data_size": 63488 00:10:04.848 } 00:10:04.848 ] 00:10:04.848 }' 00:10:04.848 14:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:04.848 14:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:04.848 14:34:56 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:04.848 14:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:04.848 14:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:05.779 14:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:05.779 14:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:05.779 14:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:05.779 14:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:05.779 14:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:05.779 14:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:05.779 14:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.779 14:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:05.779 14:34:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.779 14:34:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.779 14:34:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.779 14:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:05.779 "name": "raid_bdev1", 00:10:05.779 "uuid": "db022359-e840-4832-9b96-101cf640341e", 00:10:05.779 "strip_size_kb": 0, 00:10:05.779 "state": "online", 00:10:05.779 "raid_level": "raid1", 00:10:05.779 "superblock": true, 00:10:05.779 "num_base_bdevs": 2, 00:10:05.779 "num_base_bdevs_discovered": 2, 00:10:05.779 "num_base_bdevs_operational": 2, 00:10:05.779 "process": { 
00:10:05.779 "type": "rebuild", 00:10:05.779 "target": "spare", 00:10:05.779 "progress": { 00:10:05.779 "blocks": 45056, 00:10:05.779 "percent": 70 00:10:05.779 } 00:10:05.779 }, 00:10:05.779 "base_bdevs_list": [ 00:10:05.779 { 00:10:05.779 "name": "spare", 00:10:05.779 "uuid": "c894824d-2cb3-51f2-857a-2702b253a1e9", 00:10:05.779 "is_configured": true, 00:10:05.779 "data_offset": 2048, 00:10:05.779 "data_size": 63488 00:10:05.779 }, 00:10:05.779 { 00:10:05.779 "name": "BaseBdev2", 00:10:05.779 "uuid": "399b2df0-292b-597b-9d79-19305d2598c8", 00:10:05.779 "is_configured": true, 00:10:05.779 "data_offset": 2048, 00:10:05.779 "data_size": 63488 00:10:05.779 } 00:10:05.779 ] 00:10:05.779 }' 00:10:05.779 14:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:05.779 14:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:05.779 14:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:05.779 14:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:05.779 14:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:06.708 [2024-10-01 14:34:58.232925] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:10:06.708 [2024-10-01 14:34:58.232997] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:10:06.708 [2024-10-01 14:34:58.233088] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:06.965 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:06.965 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:06.965 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:06.965 
14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:06.965 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:06.965 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:06.965 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.965 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:06.965 14:34:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.966 14:34:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.966 14:34:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.966 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:06.966 "name": "raid_bdev1", 00:10:06.966 "uuid": "db022359-e840-4832-9b96-101cf640341e", 00:10:06.966 "strip_size_kb": 0, 00:10:06.966 "state": "online", 00:10:06.966 "raid_level": "raid1", 00:10:06.966 "superblock": true, 00:10:06.966 "num_base_bdevs": 2, 00:10:06.966 "num_base_bdevs_discovered": 2, 00:10:06.966 "num_base_bdevs_operational": 2, 00:10:06.966 "base_bdevs_list": [ 00:10:06.966 { 00:10:06.966 "name": "spare", 00:10:06.966 "uuid": "c894824d-2cb3-51f2-857a-2702b253a1e9", 00:10:06.966 "is_configured": true, 00:10:06.966 "data_offset": 2048, 00:10:06.966 "data_size": 63488 00:10:06.966 }, 00:10:06.966 { 00:10:06.966 "name": "BaseBdev2", 00:10:06.966 "uuid": "399b2df0-292b-597b-9d79-19305d2598c8", 00:10:06.966 "is_configured": true, 00:10:06.966 "data_offset": 2048, 00:10:06.966 "data_size": 63488 00:10:06.966 } 00:10:06.966 ] 00:10:06.966 }' 00:10:06.966 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:06.966 14:34:58 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:10:06.966 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:06.966 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:10:06.966 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:10:06.966 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:06.966 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:06.966 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:06.966 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:06.966 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:06.966 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.966 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:06.966 14:34:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.966 14:34:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.966 14:34:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.966 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:06.966 "name": "raid_bdev1", 00:10:06.966 "uuid": "db022359-e840-4832-9b96-101cf640341e", 00:10:06.966 "strip_size_kb": 0, 00:10:06.966 "state": "online", 00:10:06.966 "raid_level": "raid1", 00:10:06.966 "superblock": true, 00:10:06.966 "num_base_bdevs": 2, 00:10:06.966 "num_base_bdevs_discovered": 2, 00:10:06.966 "num_base_bdevs_operational": 2, 00:10:06.966 "base_bdevs_list": [ 00:10:06.966 { 00:10:06.966 
"name": "spare", 00:10:06.966 "uuid": "c894824d-2cb3-51f2-857a-2702b253a1e9", 00:10:06.966 "is_configured": true, 00:10:06.966 "data_offset": 2048, 00:10:06.966 "data_size": 63488 00:10:06.966 }, 00:10:06.966 { 00:10:06.966 "name": "BaseBdev2", 00:10:06.966 "uuid": "399b2df0-292b-597b-9d79-19305d2598c8", 00:10:06.966 "is_configured": true, 00:10:06.966 "data_offset": 2048, 00:10:06.966 "data_size": 63488 00:10:06.966 } 00:10:06.966 ] 00:10:06.966 }' 00:10:06.966 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:06.966 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:06.966 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:06.966 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:06.966 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:06.966 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:06.966 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:06.966 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:06.966 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:06.966 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:06.966 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.966 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.966 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.966 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:10:06.966 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:06.966 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:06.966 14:34:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:06.966 14:34:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:07.223 14:34:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:07.223 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:07.223 "name": "raid_bdev1",
00:10:07.223 "uuid": "db022359-e840-4832-9b96-101cf640341e",
00:10:07.223 "strip_size_kb": 0,
00:10:07.223 "state": "online",
00:10:07.223 "raid_level": "raid1",
00:10:07.223 "superblock": true,
00:10:07.223 "num_base_bdevs": 2,
00:10:07.223 "num_base_bdevs_discovered": 2,
00:10:07.223 "num_base_bdevs_operational": 2,
00:10:07.223 "base_bdevs_list": [
00:10:07.223 {
00:10:07.223 "name": "spare",
00:10:07.223 "uuid": "c894824d-2cb3-51f2-857a-2702b253a1e9",
00:10:07.223 "is_configured": true,
00:10:07.223 "data_offset": 2048,
00:10:07.223 "data_size": 63488
00:10:07.223 },
00:10:07.223 {
00:10:07.223 "name": "BaseBdev2",
00:10:07.223 "uuid": "399b2df0-292b-597b-9d79-19305d2598c8",
00:10:07.223 "is_configured": true,
00:10:07.223 "data_offset": 2048,
00:10:07.223 "data_size": 63488
00:10:07.223 }
00:10:07.223 ]
00:10:07.223 }'
00:10:07.223 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:07.223 14:34:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:07.480 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:10:07.480 14:34:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:07.480 14:34:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:07.480 [2024-10-01 14:34:58.956950] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:10:07.480 [2024-10-01 14:34:58.956976] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:07.480 [2024-10-01 14:34:58.957039] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:07.480 [2024-10-01 14:34:58.957093] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:07.480 [2024-10-01 14:34:58.957101] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:10:07.481 14:34:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:07.481 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:07.481 14:34:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:07.481 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length
00:10:07.481 14:34:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:07.481 14:34:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:07.481 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
00:10:07.481 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
00:10:07.481 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']'
00:10:07.481 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:10:07.481 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:10:07.481 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:10:07.481 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list
00:10:07.481 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:07.481 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list
00:10:07.481 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i
00:10:07.481 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:10:07.481 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:10:07.481 14:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:10:07.739 /dev/nbd0
00:10:07.739 14:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:10:07.739 14:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:10:07.739 14:34:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:10:07.739 14:34:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i
00:10:07.739 14:34:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:10:07.739 14:34:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:10:07.739 14:34:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:10:07.739 14:34:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break
00:10:07.739 14:34:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:10:07.739 14:34:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:10:07.739 14:34:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:07.739 1+0 records in
00:10:07.739 1+0 records out
00:10:07.739 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00047562 s, 8.6 MB/s
00:10:07.739 14:34:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:07.739 14:34:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096
00:10:07.739 14:34:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:07.739 14:34:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:10:07.739 14:34:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0
00:10:07.739 14:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:07.739 14:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:10:07.739 14:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1
00:10:07.996 /dev/nbd1
00:10:07.996 14:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:10:07.996 14:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:10:07.996 14:34:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:10:07.996 14:34:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i
00:10:07.996 14:34:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:10:07.996 14:34:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:10:07.996 14:34:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:10:07.996 14:34:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break
00:10:07.996 14:34:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:10:07.996 14:34:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:10:07.996 14:34:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:07.996 1+0 records in
00:10:07.996 1+0 records out
00:10:07.996 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022714 s, 18.0 MB/s
00:10:07.997 14:34:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:07.997 14:34:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096
00:10:07.997 14:34:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:07.997 14:34:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:10:07.997 14:34:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0
00:10:07.997 14:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:07.997 14:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:10:07.997 14:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
00:10:07.997 14:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1'
00:10:07.997 14:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:10:07.997 14:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:07.997 14:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list
00:10:07.997 14:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i
00:10:07.997 14:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:07.997 14:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:10:08.253 14:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:10:08.253 14:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:10:08.253 14:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:10:08.253 14:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:08.253 14:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:08.253 14:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:10:08.253 14:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:10:08.253 14:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:10:08.253 14:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:08.253 14:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:10:08.510 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:10:08.510 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:10:08.510 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:10:08.510 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:08.510 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:08.510 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:10:08.510 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:10:08.510 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:10:08.510 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']'
00:10:08.510 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare
00:10:08.510 14:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:08.510 14:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:08.510 14:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:08.510 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:10:08.510 14:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:08.510 14:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:08.510 [2024-10-01 14:35:00.035458] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:10:08.510 [2024-10-01 14:35:00.035506] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:08.510 [2024-10-01 14:35:00.035524] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:10:08.510 [2024-10-01 14:35:00.035531] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:08.510 [2024-10-01 14:35:00.037376] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:08.510 [2024-10-01 14:35:00.037406] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:10:08.510 [2024-10-01 14:35:00.037478] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:10:08.510 [2024-10-01 14:35:00.037522] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:10:08.510 [2024-10-01 14:35:00.037628] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:08.510 spare
00:10:08.510 14:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:08.510 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine
00:10:08.510 14:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:08.510 14:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:08.510 [2024-10-01 14:35:00.137702] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:10:08.510 [2024-10-01 14:35:00.137751] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:10:08.510 [2024-10-01 14:35:00.138021] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0
00:10:08.510 [2024-10-01 14:35:00.138165] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
00:10:08.510 [2024-10-01 14:35:00.138172] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00
00:10:08.510 [2024-10-01 14:35:00.138310] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:08.510 14:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:08.510 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:10:08.510 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:08.510 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:08.510 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:08.510 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:08.510 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:10:08.510 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:08.510 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:08.510 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:08.510 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:08.510 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:08.510 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:08.510 14:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:08.510 14:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:08.510 14:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:08.510 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:08.510 "name": "raid_bdev1",
00:10:08.510 "uuid": "db022359-e840-4832-9b96-101cf640341e",
00:10:08.510 "strip_size_kb": 0,
00:10:08.510 "state": "online",
00:10:08.510 "raid_level": "raid1",
00:10:08.510 "superblock": true,
00:10:08.510 "num_base_bdevs": 2,
00:10:08.510 "num_base_bdevs_discovered": 2,
00:10:08.510 "num_base_bdevs_operational": 2,
00:10:08.510 "base_bdevs_list": [
00:10:08.510 {
00:10:08.510 "name": "spare",
00:10:08.510 "uuid": "c894824d-2cb3-51f2-857a-2702b253a1e9",
00:10:08.510 "is_configured": true,
00:10:08.510 "data_offset": 2048,
00:10:08.510 "data_size": 63488
00:10:08.510 },
00:10:08.510 {
00:10:08.510 "name": "BaseBdev2",
00:10:08.510 "uuid": "399b2df0-292b-597b-9d79-19305d2598c8",
00:10:08.510 "is_configured": true,
00:10:08.510 "data_offset": 2048,
00:10:08.510 "data_size": 63488
00:10:08.510 }
00:10:08.510 ]
00:10:08.510 }'
00:10:08.510 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:08.510 14:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:08.769 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none
00:10:08.769 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:10:08.769 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:10:08.769 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:10:08.769 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:10:08.769 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:08.769 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:08.769 14:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:08.769 14:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:08.769 14:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:09.026 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:10:09.026 "name": "raid_bdev1",
00:10:09.026 "uuid": "db022359-e840-4832-9b96-101cf640341e",
00:10:09.026 "strip_size_kb": 0,
00:10:09.026 "state": "online",
00:10:09.026 "raid_level": "raid1",
00:10:09.026 "superblock": true,
00:10:09.026 "num_base_bdevs": 2,
00:10:09.026 "num_base_bdevs_discovered": 2,
00:10:09.026 "num_base_bdevs_operational": 2,
00:10:09.026 "base_bdevs_list": [
00:10:09.026 {
00:10:09.026 "name": "spare",
00:10:09.026 "uuid": "c894824d-2cb3-51f2-857a-2702b253a1e9",
00:10:09.026 "is_configured": true,
00:10:09.026 "data_offset": 2048,
00:10:09.026 "data_size": 63488
00:10:09.026 },
00:10:09.026 {
00:10:09.026 "name": "BaseBdev2",
00:10:09.026 "uuid": "399b2df0-292b-597b-9d79-19305d2598c8",
00:10:09.026 "is_configured": true,
00:10:09.026 "data_offset": 2048,
00:10:09.026 "data_size": 63488
00:10:09.026 }
00:10:09.026 ]
00:10:09.026 }'
00:10:09.026 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:10:09.026 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:10:09.026 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:10:09.026 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:10:09.026 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:09.026 14:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:09.026 14:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:09.026 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:10:09.026 14:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:09.026 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:10:09.026 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:10:09.026 14:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:09.026 14:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:09.026 [2024-10-01 14:35:00.571620] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:10:09.026 14:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:09.026 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:10:09.026 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:09.026 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:09.026 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:09.026 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:09.026 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:10:09.026 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:09.026 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:09.026 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:09.027 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:09.027 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:09.027 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:09.027 14:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:09.027 14:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:09.027 14:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:09.027 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:09.027 "name": "raid_bdev1",
00:10:09.027 "uuid": "db022359-e840-4832-9b96-101cf640341e",
00:10:09.027 "strip_size_kb": 0,
00:10:09.027 "state": "online",
00:10:09.027 "raid_level": "raid1",
00:10:09.027 "superblock": true,
00:10:09.027 "num_base_bdevs": 2,
00:10:09.027 "num_base_bdevs_discovered": 1,
00:10:09.027 "num_base_bdevs_operational": 1,
00:10:09.027 "base_bdevs_list": [
00:10:09.027 {
00:10:09.027 "name": null,
00:10:09.027 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:09.027 "is_configured": false,
00:10:09.027 "data_offset": 0,
00:10:09.027 "data_size": 63488
00:10:09.027 },
00:10:09.027 {
00:10:09.027 "name": "BaseBdev2",
00:10:09.027 "uuid": "399b2df0-292b-597b-9d79-19305d2598c8",
00:10:09.027 "is_configured": true,
00:10:09.027 "data_offset": 2048,
00:10:09.027 "data_size": 63488
00:10:09.027 }
00:10:09.027 ]
00:10:09.027 }'
00:10:09.027 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:09.027 14:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:09.284 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:10:09.284 14:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:09.284 14:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:09.284 [2024-10-01 14:35:00.915716] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:10:09.284 [2024-10-01 14:35:00.915859] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:10:09.284 [2024-10-01 14:35:00.915872] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:10:09.284 [2024-10-01 14:35:00.915900] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:10:09.284 [2024-10-01 14:35:00.924252] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0
00:10:09.284 14:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:09.284 14:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1
00:10:09.284 [2024-10-01 14:35:00.925835] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:10:10.726 14:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:10:10.726 14:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:10:10.726 14:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:10:10.726 14:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:10:10.726 14:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:10:10.726 14:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:10.726 14:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:10.726 14:35:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:10.726 14:35:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:10.726 14:35:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:10.726 14:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:10:10.726 "name": "raid_bdev1",
00:10:10.726 "uuid": "db022359-e840-4832-9b96-101cf640341e",
00:10:10.726 "strip_size_kb": 0,
00:10:10.726 "state": "online",
00:10:10.726 "raid_level": "raid1",
00:10:10.726 "superblock": true,
00:10:10.726 "num_base_bdevs": 2,
00:10:10.726 "num_base_bdevs_discovered": 2,
00:10:10.726 "num_base_bdevs_operational": 2,
00:10:10.726 "process": {
00:10:10.726 "type": "rebuild",
00:10:10.726 "target": "spare",
00:10:10.726 "progress": {
00:10:10.726 "blocks": 20480,
00:10:10.726 "percent": 32
00:10:10.726 }
00:10:10.726 },
00:10:10.726 "base_bdevs_list": [
00:10:10.726 {
00:10:10.726 "name": "spare",
00:10:10.726 "uuid": "c894824d-2cb3-51f2-857a-2702b253a1e9",
00:10:10.726 "is_configured": true,
00:10:10.726 "data_offset": 2048,
00:10:10.726 "data_size": 63488
00:10:10.726 },
00:10:10.726 {
00:10:10.726 "name": "BaseBdev2",
00:10:10.726 "uuid": "399b2df0-292b-597b-9d79-19305d2598c8",
00:10:10.726 "is_configured": true,
00:10:10.726 "data_offset": 2048,
00:10:10.726 "data_size": 63488
00:10:10.726 }
00:10:10.726 ]
00:10:10.726 }'
00:10:10.726 14:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:10:10.726 14:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:10:10.726 14:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:10:10.726 14:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:10:10.726 14:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:10:10.726 14:35:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:10.726 14:35:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:10.726 [2024-10-01 14:35:02.032063] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:10:10.726 [2024-10-01 14:35:02.130971] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:10:10.726 [2024-10-01 14:35:02.131127] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:10.726 [2024-10-01 14:35:02.131143] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:10:10.726 [2024-10-01 14:35:02.131151] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:10:10.726 14:35:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:10.726 14:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:10:10.726 14:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:10.726 14:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:10.726 14:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:10.726 14:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:10.726 14:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:10:10.726 14:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:10.726 14:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:10.726 14:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:10.726 14:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:12.174 14:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:12.174 14:35:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:12.174 14:35:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:12.174 14:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:12.174 14:35:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:12.174 14:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:12.174 "name": "raid_bdev1",
00:10:12.174 "uuid": "db022359-e840-4832-9b96-101cf640341e",
00:10:12.174 "strip_size_kb": 0,
00:10:12.174 "state": "online",
00:10:12.174 "raid_level": "raid1",
00:10:12.174 "superblock": true,
00:10:12.174 "num_base_bdevs": 2,
00:10:12.174 "num_base_bdevs_discovered": 1,
00:10:12.174 "num_base_bdevs_operational": 1,
00:10:12.174 "base_bdevs_list": [
00:10:12.174 {
00:10:12.174 "name": null,
00:10:12.174 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:12.174 "is_configured": false,
00:10:12.174 "data_offset": 0,
00:10:12.174 "data_size": 63488
00:10:12.174 },
00:10:12.174 {
00:10:12.174 "name": "BaseBdev2",
00:10:12.174 "uuid": "399b2df0-292b-597b-9d79-19305d2598c8",
00:10:12.174 "is_configured": true,
00:10:12.174 "data_offset": 2048,
00:10:12.174 "data_size": 63488
00:10:12.174 }
00:10:12.174 ]
00:10:12.174 }'
00:10:12.174 14:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:12.174 14:35:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:10.985 14:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:10:10.985 14:35:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:10.985 14:35:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:10.985 [2024-10-01 14:35:02.462762] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:10:10.985 [2024-10-01 14:35:02.462812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:10.985 [2024-10-01 14:35:02.462828] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:10:10.985 [2024-10-01 14:35:02.462837] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:10.985 [2024-10-01 14:35:02.463214] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:10.985 [2024-10-01 14:35:02.463229] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:10:10.985 [2024-10-01 14:35:02.463302] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:10:10.985 [2024-10-01 14:35:02.463314] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:10:10.985 [2024-10-01 14:35:02.463321] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:10:10.985 [2024-10-01 14:35:02.463338] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:10:10.985 [2024-10-01 14:35:02.471509] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80
00:10:10.985 spare
00:10:10.985 14:35:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:10.985 14:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1
00:10:10.985 [2024-10-01 14:35:02.473081] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:10:11.917 14:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:10:11.917 14:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:10:11.917 14:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:10:11.917 14:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:10:11.917 14:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:10:11.917 14:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:11.917 14:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:11.917 14:35:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:11.917 14:35:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:11.917 14:35:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:11.917 14:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:10:11.917 "name": "raid_bdev1",
00:10:11.917 "uuid": "db022359-e840-4832-9b96-101cf640341e",
00:10:11.917 "strip_size_kb": 0,
00:10:11.917 "state": "online",
00:10:11.917 "raid_level": "raid1",
00:10:11.917 "superblock": true,
00:10:11.917 "num_base_bdevs": 2,
00:10:11.917 "num_base_bdevs_discovered": 2,
00:10:11.917 "num_base_bdevs_operational": 2,
00:10:11.917 "process": {
00:10:11.917 "type": "rebuild",
00:10:11.917 "target": "spare",
00:10:11.917 "progress": {
00:10:11.917 "blocks": 20480,
00:10:11.917 "percent": 32
00:10:11.917 }
00:10:11.917 },
00:10:11.917 "base_bdevs_list": [
00:10:11.917 {
00:10:11.917 "name": "spare",
00:10:11.917 "uuid": "c894824d-2cb3-51f2-857a-2702b253a1e9",
00:10:11.917 "is_configured": true,
00:10:11.917 "data_offset": 2048,
00:10:11.917 "data_size": 63488
00:10:11.917 },
00:10:11.917 {
00:10:11.917 "name": "BaseBdev2",
00:10:11.917 "uuid": "399b2df0-292b-597b-9d79-19305d2598c8",
00:10:11.917 "is_configured": true,
00:10:11.917 "data_offset": 2048,
00:10:11.917 "data_size": 63488
00:10:11.917 }
00:10:11.917 ]
00:10:11.917 }'
00:10:11.917 14:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:10:11.917 14:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:10:11.917 14:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:10:11.917 14:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:10:11.917 14:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare
00:10:11.917 14:35:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:11.917 14:35:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:11.917 [2024-10-01 14:35:03.575246] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:10:11.917 [2024-10-01 14:35:03.577907] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:10:11.917 [2024-10-01 14:35:03.578015] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:11.917 [2024-10-01 14:35:03.578070] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:10:11.917 [2024-10-01 14:35:03.578330] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:10:11.917 14:35:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:11.917 14:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:10:11.917 14:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:11.917 14:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:11.917 14:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:11.917 14:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:11.917 14:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:10:11.917 14:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:11.917 14:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:11.917 14:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:11.917 14:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:12.174 14:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:12.174 14:35:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:12.174 14:35:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:12.174 14:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:12.174 14:35:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:12.174 14:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:12.174 "name": "raid_bdev1",
00:10:12.174 "uuid": "db022359-e840-4832-9b96-101cf640341e",
00:10:12.174 "strip_size_kb": 0,
00:10:12.174 "state": "online",
00:10:12.174 "raid_level": "raid1",
00:10:12.174 "superblock": true,
00:10:12.174 "num_base_bdevs": 2,
00:10:12.174 "num_base_bdevs_discovered": 1,
00:10:12.174 "num_base_bdevs_operational": 1,
00:10:12.174 "base_bdevs_list": [
00:10:12.174 {
00:10:12.174 "name": null,
00:10:12.174 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:12.174 "is_configured": false,
00:10:12.174 "data_offset": 0,
00:10:12.174 "data_size": 63488
00:10:12.174 },
00:10:12.174 {
00:10:12.174 "name": "BaseBdev2",
00:10:12.174 "uuid": "399b2df0-292b-597b-9d79-19305d2598c8",
00:10:12.174 "is_configured": true,
00:10:12.174 "data_offset": 2048,
00:10:12.174 "data_size": 63488
00:10:12.174 }
00:10:12.174 ]
00:10:12.174 }'
00:10:12.174 14:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:12.174 14:35:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:12.432 14:35:03
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:12.432 14:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:12.432 14:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:12.432 14:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:12.432 14:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:12.432 14:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.432 14:35:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.432 14:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:12.432 14:35:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.432 14:35:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.432 14:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:12.432 "name": "raid_bdev1", 00:10:12.432 "uuid": "db022359-e840-4832-9b96-101cf640341e", 00:10:12.432 "strip_size_kb": 0, 00:10:12.432 "state": "online", 00:10:12.432 "raid_level": "raid1", 00:10:12.432 "superblock": true, 00:10:12.432 "num_base_bdevs": 2, 00:10:12.432 "num_base_bdevs_discovered": 1, 00:10:12.432 "num_base_bdevs_operational": 1, 00:10:12.432 "base_bdevs_list": [ 00:10:12.432 { 00:10:12.432 "name": null, 00:10:12.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.432 "is_configured": false, 00:10:12.432 "data_offset": 0, 00:10:12.432 "data_size": 63488 00:10:12.432 }, 00:10:12.432 { 00:10:12.432 "name": "BaseBdev2", 00:10:12.432 "uuid": "399b2df0-292b-597b-9d79-19305d2598c8", 00:10:12.432 "is_configured": true, 00:10:12.432 "data_offset": 2048, 00:10:12.432 "data_size": 
63488 00:10:12.432 } 00:10:12.432 ] 00:10:12.432 }' 00:10:12.432 14:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:12.432 14:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:12.432 14:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:12.432 14:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:12.432 14:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:10:12.432 14:35:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.432 14:35:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.432 14:35:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.432 14:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:10:12.432 14:35:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.432 14:35:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.432 [2024-10-01 14:35:04.018237] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:10:12.432 [2024-10-01 14:35:04.018284] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.432 [2024-10-01 14:35:04.018301] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:12.432 [2024-10-01 14:35:04.018310] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.432 [2024-10-01 14:35:04.018678] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.432 [2024-10-01 14:35:04.018690] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:10:12.432 [2024-10-01 14:35:04.018772] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:10:12.432 [2024-10-01 14:35:04.018784] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:10:12.432 [2024-10-01 14:35:04.018794] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:10:12.432 [2024-10-01 14:35:04.018802] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:10:12.432 BaseBdev1 00:10:12.432 14:35:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.432 14:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:10:13.364 14:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:13.364 14:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:13.364 14:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:13.364 14:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.364 14:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.364 14:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:13.364 14:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.364 14:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.364 14:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.364 14:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.364 14:35:05 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.364 14:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:13.364 14:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.364 14:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.364 14:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.694 14:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.694 "name": "raid_bdev1", 00:10:13.694 "uuid": "db022359-e840-4832-9b96-101cf640341e", 00:10:13.694 "strip_size_kb": 0, 00:10:13.694 "state": "online", 00:10:13.694 "raid_level": "raid1", 00:10:13.694 "superblock": true, 00:10:13.694 "num_base_bdevs": 2, 00:10:13.694 "num_base_bdevs_discovered": 1, 00:10:13.694 "num_base_bdevs_operational": 1, 00:10:13.694 "base_bdevs_list": [ 00:10:13.694 { 00:10:13.694 "name": null, 00:10:13.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.694 "is_configured": false, 00:10:13.694 "data_offset": 0, 00:10:13.695 "data_size": 63488 00:10:13.695 }, 00:10:13.695 { 00:10:13.695 "name": "BaseBdev2", 00:10:13.695 "uuid": "399b2df0-292b-597b-9d79-19305d2598c8", 00:10:13.695 "is_configured": true, 00:10:13.695 "data_offset": 2048, 00:10:13.695 "data_size": 63488 00:10:13.695 } 00:10:13.695 ] 00:10:13.695 }' 00:10:13.695 14:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.695 14:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.695 14:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:13.695 14:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:13.695 14:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:10:13.695 14:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:13.695 14:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:13.695 14:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.695 14:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.695 14:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.695 14:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:13.695 14:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.695 14:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:13.695 "name": "raid_bdev1", 00:10:13.695 "uuid": "db022359-e840-4832-9b96-101cf640341e", 00:10:13.695 "strip_size_kb": 0, 00:10:13.695 "state": "online", 00:10:13.695 "raid_level": "raid1", 00:10:13.695 "superblock": true, 00:10:13.695 "num_base_bdevs": 2, 00:10:13.695 "num_base_bdevs_discovered": 1, 00:10:13.695 "num_base_bdevs_operational": 1, 00:10:13.695 "base_bdevs_list": [ 00:10:13.695 { 00:10:13.695 "name": null, 00:10:13.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.695 "is_configured": false, 00:10:13.695 "data_offset": 0, 00:10:13.695 "data_size": 63488 00:10:13.695 }, 00:10:13.695 { 00:10:13.695 "name": "BaseBdev2", 00:10:13.695 "uuid": "399b2df0-292b-597b-9d79-19305d2598c8", 00:10:13.695 "is_configured": true, 00:10:13.695 "data_offset": 2048, 00:10:13.695 "data_size": 63488 00:10:13.695 } 00:10:13.695 ] 00:10:13.695 }' 00:10:13.695 14:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:13.953 14:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:13.953 14:35:05 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:13.953 14:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:13.953 14:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:10:13.953 14:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:10:13.953 14:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:10:13.953 14:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:13.953 14:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:13.953 14:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:13.953 14:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:13.953 14:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:10:13.953 14:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.953 14:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.953 [2024-10-01 14:35:05.430549] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:13.953 [2024-10-01 14:35:05.430667] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:10:13.953 [2024-10-01 14:35:05.430678] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:10:13.953 request: 00:10:13.953 { 00:10:13.953 "base_bdev": "BaseBdev1", 00:10:13.953 "raid_bdev": "raid_bdev1", 00:10:13.953 "method": 
"bdev_raid_add_base_bdev", 00:10:13.953 "req_id": 1 00:10:13.953 } 00:10:13.953 Got JSON-RPC error response 00:10:13.953 response: 00:10:13.953 { 00:10:13.953 "code": -22, 00:10:13.953 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:10:13.953 } 00:10:13.953 14:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:13.953 14:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:10:13.953 14:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:13.953 14:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:13.953 14:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:13.953 14:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:10:14.930 14:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:14.930 14:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:14.930 14:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:14.930 14:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:14.930 14:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:14.930 14:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:14.930 14:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.931 14:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.931 14:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.931 14:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.931 14:35:06 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:14.931 14:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.931 14:35:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.931 14:35:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.931 14:35:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.931 14:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.931 "name": "raid_bdev1", 00:10:14.931 "uuid": "db022359-e840-4832-9b96-101cf640341e", 00:10:14.931 "strip_size_kb": 0, 00:10:14.931 "state": "online", 00:10:14.931 "raid_level": "raid1", 00:10:14.931 "superblock": true, 00:10:14.931 "num_base_bdevs": 2, 00:10:14.931 "num_base_bdevs_discovered": 1, 00:10:14.931 "num_base_bdevs_operational": 1, 00:10:14.931 "base_bdevs_list": [ 00:10:14.931 { 00:10:14.931 "name": null, 00:10:14.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.931 "is_configured": false, 00:10:14.931 "data_offset": 0, 00:10:14.931 "data_size": 63488 00:10:14.931 }, 00:10:14.931 { 00:10:14.931 "name": "BaseBdev2", 00:10:14.931 "uuid": "399b2df0-292b-597b-9d79-19305d2598c8", 00:10:14.931 "is_configured": true, 00:10:14.931 "data_offset": 2048, 00:10:14.931 "data_size": 63488 00:10:14.931 } 00:10:14.931 ] 00:10:14.931 }' 00:10:14.931 14:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.931 14:35:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.188 14:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:15.188 14:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:15.188 14:35:06 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:15.188 14:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:15.188 14:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:15.188 14:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.188 14:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:15.188 14:35:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.188 14:35:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.188 14:35:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.188 14:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:15.188 "name": "raid_bdev1", 00:10:15.188 "uuid": "db022359-e840-4832-9b96-101cf640341e", 00:10:15.188 "strip_size_kb": 0, 00:10:15.188 "state": "online", 00:10:15.188 "raid_level": "raid1", 00:10:15.188 "superblock": true, 00:10:15.188 "num_base_bdevs": 2, 00:10:15.188 "num_base_bdevs_discovered": 1, 00:10:15.188 "num_base_bdevs_operational": 1, 00:10:15.188 "base_bdevs_list": [ 00:10:15.188 { 00:10:15.188 "name": null, 00:10:15.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.188 "is_configured": false, 00:10:15.188 "data_offset": 0, 00:10:15.188 "data_size": 63488 00:10:15.188 }, 00:10:15.188 { 00:10:15.188 "name": "BaseBdev2", 00:10:15.188 "uuid": "399b2df0-292b-597b-9d79-19305d2598c8", 00:10:15.188 "is_configured": true, 00:10:15.188 "data_offset": 2048, 00:10:15.188 "data_size": 63488 00:10:15.188 } 00:10:15.188 ] 00:10:15.188 }' 00:10:15.188 14:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:15.188 14:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:10:15.188 14:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:15.188 14:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:15.188 14:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 73912 00:10:15.188 14:35:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 73912 ']' 00:10:15.188 14:35:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 73912 00:10:15.188 14:35:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:15.188 14:35:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:15.188 14:35:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73912 00:10:15.188 killing process with pid 73912 00:10:15.188 Received shutdown signal, test time was about 60.000000 seconds 00:10:15.188 00:10:15.188 Latency(us) 00:10:15.188 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:15.188 =================================================================================================================== 00:10:15.188 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:15.188 14:35:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:15.188 14:35:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:15.188 14:35:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73912' 00:10:15.188 14:35:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 73912 00:10:15.188 [2024-10-01 14:35:06.844674] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:15.188 14:35:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 73912 
00:10:15.188 [2024-10-01 14:35:06.844777] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:15.188 [2024-10-01 14:35:06.844815] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:15.188 [2024-10-01 14:35:06.844824] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:10:15.445 [2024-10-01 14:35:06.992037] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:16.009 ************************************ 00:10:16.009 END TEST raid_rebuild_test_sb 00:10:16.009 ************************************ 00:10:16.009 14:35:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:10:16.009 00:10:16.009 real 0m21.457s 00:10:16.009 user 0m24.899s 00:10:16.009 sys 0m3.193s 00:10:16.009 14:35:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:16.009 14:35:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.009 14:35:07 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:10:16.009 14:35:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:10:16.009 14:35:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:16.009 14:35:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:16.009 ************************************ 00:10:16.009 START TEST raid_rebuild_test_io 00:10:16.009 ************************************ 00:10:16.009 14:35:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:10:16.009 14:35:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:10:16.009 14:35:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:10:16.009 14:35:07 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@571 -- # local superblock=false 00:10:16.009 14:35:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:10:16.009 14:35:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:10:16.009 14:35:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:10:16.009 14:35:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:16.009 14:35:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:10:16.009 14:35:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:16.009 14:35:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:16.009 14:35:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:10:16.009 14:35:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:16.009 14:35:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:16.009 14:35:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:16.009 14:35:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:10:16.009 14:35:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:10:16.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:16.009 14:35:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:10:16.009 14:35:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:10:16.009 14:35:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:10:16.009 14:35:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:10:16.009 14:35:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:10:16.009 14:35:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:10:16.009 14:35:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:10:16.009 14:35:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=74624 00:10:16.009 14:35:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 74624 00:10:16.009 14:35:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 74624 ']' 00:10:16.009 14:35:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.009 14:35:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:16.009 14:35:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.009 14:35:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:16.009 14:35:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:16.009 14:35:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:10:16.267 I/O size of 3145728 is greater than zero copy threshold (65536). 00:10:16.267 Zero copy mechanism will not be used. 
00:10:16.267 [2024-10-01 14:35:07.753931] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:10:16.267 [2024-10-01 14:35:07.754051] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74624 ] 00:10:16.267 [2024-10-01 14:35:07.895017] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.525 [2024-10-01 14:35:08.043803] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.525 [2024-10-01 14:35:08.153230] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:16.525 [2024-10-01 14:35:08.153372] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:17.090 BaseBdev1_malloc 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 
-- # set +x 00:10:17.090 [2024-10-01 14:35:08.582350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:10:17.090 [2024-10-01 14:35:08.582406] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.090 [2024-10-01 14:35:08.582423] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:17.090 [2024-10-01 14:35:08.582435] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.090 [2024-10-01 14:35:08.584149] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.090 [2024-10-01 14:35:08.584282] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:17.090 BaseBdev1 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:17.090 BaseBdev2_malloc 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:17.090 [2024-10-01 14:35:08.630251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:10:17.090 [2024-10-01 14:35:08.630434] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.090 [2024-10-01 14:35:08.630455] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:17.090 [2024-10-01 14:35:08.630466] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.090 [2024-10-01 14:35:08.632178] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.090 [2024-10-01 14:35:08.632208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:17.090 BaseBdev2 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:17.090 spare_malloc 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:17.090 spare_delay 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:10:17.090 [2024-10-01 14:35:08.669546] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:17.090 [2024-10-01 14:35:08.669675] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.090 [2024-10-01 14:35:08.669693] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:10:17.090 [2024-10-01 14:35:08.669701] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.090 [2024-10-01 14:35:08.671440] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.090 [2024-10-01 14:35:08.671472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:10:17.090 spare 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:17.090 [2024-10-01 14:35:08.677593] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:17.090 [2024-10-01 14:35:08.679083] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:17.090 [2024-10-01 14:35:08.679151] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:17.090 [2024-10-01 14:35:08.679160] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:17.090 [2024-10-01 14:35:08.679379] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:17.090 [2024-10-01 14:35:08.679489] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:17.090 [2024-10-01 
14:35:08.679496] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:17.090 [2024-10-01 14:35:08.679603] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.090 "name": "raid_bdev1", 00:10:17.090 "uuid": "552f38b8-c110-4b58-ab0a-c6d9b8f31afa", 00:10:17.090 "strip_size_kb": 0, 00:10:17.090 "state": "online", 00:10:17.090 "raid_level": "raid1", 00:10:17.090 "superblock": false, 00:10:17.090 "num_base_bdevs": 2, 00:10:17.090 "num_base_bdevs_discovered": 2, 00:10:17.090 "num_base_bdevs_operational": 2, 00:10:17.090 "base_bdevs_list": [ 00:10:17.090 { 00:10:17.090 "name": "BaseBdev1", 00:10:17.090 "uuid": "05922723-15b0-5d71-b2cc-1dbb57d77aa9", 00:10:17.090 "is_configured": true, 00:10:17.090 "data_offset": 0, 00:10:17.090 "data_size": 65536 00:10:17.090 }, 00:10:17.090 { 00:10:17.090 "name": "BaseBdev2", 00:10:17.090 "uuid": "7c2bf88a-9ea8-51d4-9ca7-ea7d41d3eccc", 00:10:17.090 "is_configured": true, 00:10:17.090 "data_offset": 0, 00:10:17.090 "data_size": 65536 00:10:17.090 } 00:10:17.090 ] 00:10:17.090 }' 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.090 14:35:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:17.349 14:35:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:17.349 14:35:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:10:17.349 14:35:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.349 14:35:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:17.349 [2024-10-01 14:35:09.017900] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:17.349 14:35:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.606 14:35:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:10:17.606 14:35:09 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.606 14:35:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:10:17.606 14:35:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.606 14:35:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:17.606 14:35:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.606 14:35:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:10:17.606 14:35:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:10:17.606 14:35:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:17.606 14:35:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:10:17.606 14:35:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.606 14:35:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:17.606 [2024-10-01 14:35:09.081648] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:17.606 14:35:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.606 14:35:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:17.606 14:35:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:17.606 14:35:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:17.606 14:35:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.606 14:35:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.606 14:35:09 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:17.606 14:35:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.606 14:35:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.606 14:35:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.606 14:35:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.606 14:35:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:17.606 14:35:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.606 14:35:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.606 14:35:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:17.606 14:35:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.606 14:35:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.606 "name": "raid_bdev1", 00:10:17.606 "uuid": "552f38b8-c110-4b58-ab0a-c6d9b8f31afa", 00:10:17.606 "strip_size_kb": 0, 00:10:17.606 "state": "online", 00:10:17.606 "raid_level": "raid1", 00:10:17.606 "superblock": false, 00:10:17.606 "num_base_bdevs": 2, 00:10:17.606 "num_base_bdevs_discovered": 1, 00:10:17.606 "num_base_bdevs_operational": 1, 00:10:17.606 "base_bdevs_list": [ 00:10:17.606 { 00:10:17.606 "name": null, 00:10:17.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.606 "is_configured": false, 00:10:17.606 "data_offset": 0, 00:10:17.606 "data_size": 65536 00:10:17.606 }, 00:10:17.606 { 00:10:17.606 "name": "BaseBdev2", 00:10:17.606 "uuid": "7c2bf88a-9ea8-51d4-9ca7-ea7d41d3eccc", 00:10:17.606 "is_configured": true, 00:10:17.606 "data_offset": 0, 00:10:17.606 "data_size": 65536 00:10:17.606 } 
00:10:17.606 ] 00:10:17.606 }' 00:10:17.606 14:35:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.606 14:35:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:17.606 [2024-10-01 14:35:09.162188] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:17.606 I/O size of 3145728 is greater than zero copy threshold (65536). 00:10:17.606 Zero copy mechanism will not be used. 00:10:17.606 Running I/O for 60 seconds... 00:10:17.864 14:35:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:17.864 14:35:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.864 14:35:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:17.864 [2024-10-01 14:35:09.387368] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:17.864 14:35:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.864 14:35:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:10:17.864 [2024-10-01 14:35:09.426104] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:10:17.864 [2024-10-01 14:35:09.427754] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:17.864 [2024-10-01 14:35:09.533978] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:10:17.864 [2024-10-01 14:35:09.534326] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:10:18.120 [2024-10-01 14:35:09.752526] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:10:18.120 [2024-10-01 14:35:09.752750] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:10:18.378 [2024-10-01 14:35:09.983927] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:10:18.378 [2024-10-01 14:35:09.984439] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:10:18.635 [2024-10-01 14:35:10.097199] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:10:18.892 181.00 IOPS, 543.00 MiB/s [2024-10-01 14:35:10.412052] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:10:18.892 14:35:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:18.892 14:35:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:18.892 14:35:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:18.892 14:35:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:18.892 14:35:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:18.892 14:35:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.892 14:35:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:18.892 14:35:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.892 14:35:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:18.892 14:35:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.892 14:35:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:10:18.892 "name": "raid_bdev1", 00:10:18.892 "uuid": "552f38b8-c110-4b58-ab0a-c6d9b8f31afa", 00:10:18.892 "strip_size_kb": 0, 00:10:18.892 "state": "online", 00:10:18.892 "raid_level": "raid1", 00:10:18.892 "superblock": false, 00:10:18.892 "num_base_bdevs": 2, 00:10:18.892 "num_base_bdevs_discovered": 2, 00:10:18.892 "num_base_bdevs_operational": 2, 00:10:18.892 "process": { 00:10:18.892 "type": "rebuild", 00:10:18.892 "target": "spare", 00:10:18.892 "progress": { 00:10:18.892 "blocks": 14336, 00:10:18.892 "percent": 21 00:10:18.892 } 00:10:18.892 }, 00:10:18.892 "base_bdevs_list": [ 00:10:18.892 { 00:10:18.892 "name": "spare", 00:10:18.892 "uuid": "d27b9aa6-6e35-5f71-b1db-a977f630a785", 00:10:18.892 "is_configured": true, 00:10:18.892 "data_offset": 0, 00:10:18.892 "data_size": 65536 00:10:18.892 }, 00:10:18.892 { 00:10:18.892 "name": "BaseBdev2", 00:10:18.892 "uuid": "7c2bf88a-9ea8-51d4-9ca7-ea7d41d3eccc", 00:10:18.893 "is_configured": true, 00:10:18.893 "data_offset": 0, 00:10:18.893 "data_size": 65536 00:10:18.893 } 00:10:18.893 ] 00:10:18.893 }' 00:10:18.893 14:35:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:18.893 14:35:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:18.893 14:35:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:18.893 14:35:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:18.893 14:35:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:10:18.893 14:35:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.893 14:35:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:18.893 [2024-10-01 14:35:10.524392] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:19.151 [2024-10-01 14:35:10.630166] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:10:19.151 [2024-10-01 14:35:10.694110] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:10:19.151 [2024-10-01 14:35:10.706435] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:19.151 [2024-10-01 14:35:10.706563] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:19.151 [2024-10-01 14:35:10.706590] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:10:19.151 [2024-10-01 14:35:10.721166] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:10:19.151 14:35:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.151 14:35:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:19.151 14:35:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:19.151 14:35:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:19.151 14:35:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:19.151 14:35:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:19.151 14:35:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:19.151 14:35:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.151 14:35:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.151 14:35:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.151 14:35:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 
00:10:19.151 14:35:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.151 14:35:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.151 14:35:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:19.151 14:35:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:19.151 14:35:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.151 14:35:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.151 "name": "raid_bdev1", 00:10:19.151 "uuid": "552f38b8-c110-4b58-ab0a-c6d9b8f31afa", 00:10:19.151 "strip_size_kb": 0, 00:10:19.151 "state": "online", 00:10:19.151 "raid_level": "raid1", 00:10:19.151 "superblock": false, 00:10:19.151 "num_base_bdevs": 2, 00:10:19.151 "num_base_bdevs_discovered": 1, 00:10:19.151 "num_base_bdevs_operational": 1, 00:10:19.151 "base_bdevs_list": [ 00:10:19.151 { 00:10:19.151 "name": null, 00:10:19.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.151 "is_configured": false, 00:10:19.151 "data_offset": 0, 00:10:19.151 "data_size": 65536 00:10:19.151 }, 00:10:19.151 { 00:10:19.151 "name": "BaseBdev2", 00:10:19.151 "uuid": "7c2bf88a-9ea8-51d4-9ca7-ea7d41d3eccc", 00:10:19.151 "is_configured": true, 00:10:19.151 "data_offset": 0, 00:10:19.151 "data_size": 65536 00:10:19.151 } 00:10:19.151 ] 00:10:19.151 }' 00:10:19.151 14:35:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.151 14:35:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:19.410 14:35:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:19.410 14:35:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:19.410 14:35:11 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:19.410 14:35:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:19.410 14:35:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:19.410 14:35:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.410 14:35:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:19.410 14:35:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.410 14:35:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:19.410 14:35:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.410 14:35:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:19.410 "name": "raid_bdev1", 00:10:19.410 "uuid": "552f38b8-c110-4b58-ab0a-c6d9b8f31afa", 00:10:19.410 "strip_size_kb": 0, 00:10:19.410 "state": "online", 00:10:19.410 "raid_level": "raid1", 00:10:19.410 "superblock": false, 00:10:19.410 "num_base_bdevs": 2, 00:10:19.410 "num_base_bdevs_discovered": 1, 00:10:19.410 "num_base_bdevs_operational": 1, 00:10:19.410 "base_bdevs_list": [ 00:10:19.410 { 00:10:19.410 "name": null, 00:10:19.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.410 "is_configured": false, 00:10:19.410 "data_offset": 0, 00:10:19.410 "data_size": 65536 00:10:19.410 }, 00:10:19.410 { 00:10:19.410 "name": "BaseBdev2", 00:10:19.410 "uuid": "7c2bf88a-9ea8-51d4-9ca7-ea7d41d3eccc", 00:10:19.410 "is_configured": true, 00:10:19.410 "data_offset": 0, 00:10:19.410 "data_size": 65536 00:10:19.410 } 00:10:19.410 ] 00:10:19.410 }' 00:10:19.410 14:35:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:19.680 14:35:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- 
# [[ none == \n\o\n\e ]] 00:10:19.680 14:35:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:19.680 14:35:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:19.680 14:35:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:19.680 14:35:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.680 14:35:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:19.680 [2024-10-01 14:35:11.149412] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:19.680 14:35:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.680 14:35:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:10:19.680 195.50 IOPS, 586.50 MiB/s [2024-10-01 14:35:11.189527] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:19.680 [2024-10-01 14:35:11.191181] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:19.680 [2024-10-01 14:35:11.303483] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:10:19.680 [2024-10-01 14:35:11.303948] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:10:19.974 [2024-10-01 14:35:11.505389] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:10:19.974 [2024-10-01 14:35:11.505769] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:10:20.231 [2024-10-01 14:35:11.746534] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:10:20.231 
[2024-10-01 14:35:11.747056] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:10:20.489 [2024-10-01 14:35:11.960238] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:10:20.746 168.33 IOPS, 505.00 MiB/s 14:35:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:20.746 14:35:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:20.746 14:35:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:20.746 14:35:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:20.746 14:35:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:20.746 14:35:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.746 14:35:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:20.746 14:35:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.746 14:35:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:20.746 14:35:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.746 14:35:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:20.746 "name": "raid_bdev1", 00:10:20.746 "uuid": "552f38b8-c110-4b58-ab0a-c6d9b8f31afa", 00:10:20.746 "strip_size_kb": 0, 00:10:20.746 "state": "online", 00:10:20.746 "raid_level": "raid1", 00:10:20.746 "superblock": false, 00:10:20.746 "num_base_bdevs": 2, 00:10:20.746 "num_base_bdevs_discovered": 2, 00:10:20.746 "num_base_bdevs_operational": 2, 00:10:20.746 "process": { 00:10:20.746 "type": "rebuild", 00:10:20.746 "target": "spare", 
00:10:20.746 "progress": { 00:10:20.746 "blocks": 10240, 00:10:20.746 "percent": 15 00:10:20.746 } 00:10:20.746 }, 00:10:20.746 "base_bdevs_list": [ 00:10:20.746 { 00:10:20.746 "name": "spare", 00:10:20.746 "uuid": "d27b9aa6-6e35-5f71-b1db-a977f630a785", 00:10:20.746 "is_configured": true, 00:10:20.746 "data_offset": 0, 00:10:20.746 "data_size": 65536 00:10:20.746 }, 00:10:20.746 { 00:10:20.746 "name": "BaseBdev2", 00:10:20.746 "uuid": "7c2bf88a-9ea8-51d4-9ca7-ea7d41d3eccc", 00:10:20.746 "is_configured": true, 00:10:20.746 "data_offset": 0, 00:10:20.746 "data_size": 65536 00:10:20.746 } 00:10:20.746 ] 00:10:20.746 }' 00:10:20.746 14:35:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:20.746 14:35:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:20.746 14:35:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:20.746 14:35:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:20.746 14:35:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:10:20.746 14:35:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:10:20.746 14:35:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:10:20.746 14:35:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:10:20.746 14:35:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=332 00:10:20.746 14:35:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:20.746 14:35:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:20.746 14:35:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:20.746 14:35:12 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:20.747 14:35:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:20.747 14:35:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:20.747 14:35:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.747 14:35:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.747 14:35:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:20.747 14:35:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:20.747 14:35:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.747 [2024-10-01 14:35:12.292732] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:10:20.747 14:35:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:20.747 "name": "raid_bdev1", 00:10:20.747 "uuid": "552f38b8-c110-4b58-ab0a-c6d9b8f31afa", 00:10:20.747 "strip_size_kb": 0, 00:10:20.747 "state": "online", 00:10:20.747 "raid_level": "raid1", 00:10:20.747 "superblock": false, 00:10:20.747 "num_base_bdevs": 2, 00:10:20.747 "num_base_bdevs_discovered": 2, 00:10:20.747 "num_base_bdevs_operational": 2, 00:10:20.747 "process": { 00:10:20.747 "type": "rebuild", 00:10:20.747 "target": "spare", 00:10:20.747 "progress": { 00:10:20.747 "blocks": 12288, 00:10:20.747 "percent": 18 00:10:20.747 } 00:10:20.747 }, 00:10:20.747 "base_bdevs_list": [ 00:10:20.747 { 00:10:20.747 "name": "spare", 00:10:20.747 "uuid": "d27b9aa6-6e35-5f71-b1db-a977f630a785", 00:10:20.747 "is_configured": true, 00:10:20.747 "data_offset": 0, 00:10:20.747 "data_size": 65536 00:10:20.747 }, 00:10:20.747 { 00:10:20.747 "name": "BaseBdev2", 00:10:20.747 "uuid": 
"7c2bf88a-9ea8-51d4-9ca7-ea7d41d3eccc", 00:10:20.747 "is_configured": true, 00:10:20.747 "data_offset": 0, 00:10:20.747 "data_size": 65536 00:10:20.747 } 00:10:20.747 ] 00:10:20.747 }' 00:10:20.747 14:35:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:20.747 14:35:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:20.747 14:35:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:20.747 14:35:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:20.747 14:35:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:20.747 [2024-10-01 14:35:12.393753] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:10:20.747 [2024-10-01 14:35:12.393969] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:10:21.312 [2024-10-01 14:35:12.721126] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:10:21.312 [2024-10-01 14:35:12.940095] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:10:21.827 141.00 IOPS, 423.00 MiB/s 14:35:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:21.827 14:35:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:21.827 14:35:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:21.827 14:35:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:21.827 14:35:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:21.827 
14:35:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:21.827 [2024-10-01 14:35:13.374269] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:10:21.827 [2024-10-01 14:35:13.374557] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:10:21.827 14:35:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.827 14:35:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.827 14:35:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.827 14:35:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:21.827 14:35:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.827 14:35:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:21.827 "name": "raid_bdev1", 00:10:21.827 "uuid": "552f38b8-c110-4b58-ab0a-c6d9b8f31afa", 00:10:21.827 "strip_size_kb": 0, 00:10:21.827 "state": "online", 00:10:21.827 "raid_level": "raid1", 00:10:21.827 "superblock": false, 00:10:21.827 "num_base_bdevs": 2, 00:10:21.827 "num_base_bdevs_discovered": 2, 00:10:21.827 "num_base_bdevs_operational": 2, 00:10:21.827 "process": { 00:10:21.827 "type": "rebuild", 00:10:21.827 "target": "spare", 00:10:21.827 "progress": { 00:10:21.827 "blocks": 28672, 00:10:21.827 "percent": 43 00:10:21.827 } 00:10:21.827 }, 00:10:21.827 "base_bdevs_list": [ 00:10:21.827 { 00:10:21.827 "name": "spare", 00:10:21.827 "uuid": "d27b9aa6-6e35-5f71-b1db-a977f630a785", 00:10:21.827 "is_configured": true, 00:10:21.827 "data_offset": 0, 00:10:21.827 "data_size": 65536 00:10:21.827 }, 00:10:21.827 { 00:10:21.827 "name": "BaseBdev2", 00:10:21.827 "uuid": "7c2bf88a-9ea8-51d4-9ca7-ea7d41d3eccc", 
00:10:21.827 "is_configured": true, 00:10:21.827 "data_offset": 0, 00:10:21.827 "data_size": 65536 00:10:21.827 } 00:10:21.827 ] 00:10:21.827 }' 00:10:21.827 14:35:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:21.827 14:35:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:21.827 14:35:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:21.827 14:35:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:21.827 14:35:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:22.085 [2024-10-01 14:35:13.717677] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:10:22.343 [2024-10-01 14:35:13.829189] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:10:22.858 120.00 IOPS, 360.00 MiB/s 14:35:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:22.858 14:35:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:22.858 14:35:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:22.858 14:35:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:22.858 14:35:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:22.858 14:35:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:22.858 14:35:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.858 14:35:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:22.858 14:35:14 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.858 14:35:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:22.858 14:35:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.858 14:35:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:22.858 "name": "raid_bdev1", 00:10:22.858 "uuid": "552f38b8-c110-4b58-ab0a-c6d9b8f31afa", 00:10:22.858 "strip_size_kb": 0, 00:10:22.858 "state": "online", 00:10:22.858 "raid_level": "raid1", 00:10:22.858 "superblock": false, 00:10:22.858 "num_base_bdevs": 2, 00:10:22.858 "num_base_bdevs_discovered": 2, 00:10:22.858 "num_base_bdevs_operational": 2, 00:10:22.858 "process": { 00:10:22.858 "type": "rebuild", 00:10:22.858 "target": "spare", 00:10:22.858 "progress": { 00:10:22.858 "blocks": 43008, 00:10:22.858 "percent": 65 00:10:22.858 } 00:10:22.858 }, 00:10:22.858 "base_bdevs_list": [ 00:10:22.858 { 00:10:22.858 "name": "spare", 00:10:22.858 "uuid": "d27b9aa6-6e35-5f71-b1db-a977f630a785", 00:10:22.858 "is_configured": true, 00:10:22.858 "data_offset": 0, 00:10:22.858 "data_size": 65536 00:10:22.858 }, 00:10:22.858 { 00:10:22.858 "name": "BaseBdev2", 00:10:22.858 "uuid": "7c2bf88a-9ea8-51d4-9ca7-ea7d41d3eccc", 00:10:22.858 "is_configured": true, 00:10:22.858 "data_offset": 0, 00:10:22.858 "data_size": 65536 00:10:22.858 } 00:10:22.858 ] 00:10:22.858 }' 00:10:22.858 14:35:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:23.115 14:35:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:23.115 14:35:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:23.115 14:35:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:23.115 14:35:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # 
sleep 1 00:10:23.373 [2024-10-01 14:35:14.819218] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:10:23.630 108.33 IOPS, 325.00 MiB/s [2024-10-01 14:35:15.246141] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:10:23.887 [2024-10-01 14:35:15.566882] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:10:24.145 14:35:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:24.145 14:35:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:24.145 14:35:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:24.145 14:35:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:24.145 14:35:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:24.145 14:35:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:24.145 14:35:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.145 14:35:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:24.145 14:35:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.145 14:35:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:24.145 14:35:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.145 14:35:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:24.145 "name": "raid_bdev1", 00:10:24.145 "uuid": "552f38b8-c110-4b58-ab0a-c6d9b8f31afa", 00:10:24.145 "strip_size_kb": 0, 00:10:24.145 "state": "online", 00:10:24.145 
"raid_level": "raid1", 00:10:24.145 "superblock": false, 00:10:24.145 "num_base_bdevs": 2, 00:10:24.145 "num_base_bdevs_discovered": 2, 00:10:24.145 "num_base_bdevs_operational": 2, 00:10:24.145 "process": { 00:10:24.145 "type": "rebuild", 00:10:24.145 "target": "spare", 00:10:24.145 "progress": { 00:10:24.145 "blocks": 65536, 00:10:24.145 "percent": 100 00:10:24.145 } 00:10:24.145 }, 00:10:24.145 "base_bdevs_list": [ 00:10:24.145 { 00:10:24.145 "name": "spare", 00:10:24.145 "uuid": "d27b9aa6-6e35-5f71-b1db-a977f630a785", 00:10:24.145 "is_configured": true, 00:10:24.145 "data_offset": 0, 00:10:24.145 "data_size": 65536 00:10:24.145 }, 00:10:24.145 { 00:10:24.145 "name": "BaseBdev2", 00:10:24.145 "uuid": "7c2bf88a-9ea8-51d4-9ca7-ea7d41d3eccc", 00:10:24.145 "is_configured": true, 00:10:24.145 "data_offset": 0, 00:10:24.145 "data_size": 65536 00:10:24.145 } 00:10:24.145 ] 00:10:24.145 }' 00:10:24.145 14:35:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:24.145 14:35:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:24.145 14:35:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:24.145 [2024-10-01 14:35:15.666899] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:10:24.146 [2024-10-01 14:35:15.668590] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.146 14:35:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:24.146 14:35:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:25.051 97.43 IOPS, 292.29 MiB/s 14:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:25.051 14:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:25.051 14:35:16 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:25.051 14:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:25.051 14:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:25.051 14:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:25.051 14:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.051 14:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:25.051 14:35:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.051 14:35:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:25.051 14:35:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.051 14:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:25.051 "name": "raid_bdev1", 00:10:25.051 "uuid": "552f38b8-c110-4b58-ab0a-c6d9b8f31afa", 00:10:25.051 "strip_size_kb": 0, 00:10:25.051 "state": "online", 00:10:25.051 "raid_level": "raid1", 00:10:25.051 "superblock": false, 00:10:25.051 "num_base_bdevs": 2, 00:10:25.051 "num_base_bdevs_discovered": 2, 00:10:25.051 "num_base_bdevs_operational": 2, 00:10:25.051 "base_bdevs_list": [ 00:10:25.051 { 00:10:25.051 "name": "spare", 00:10:25.051 "uuid": "d27b9aa6-6e35-5f71-b1db-a977f630a785", 00:10:25.051 "is_configured": true, 00:10:25.051 "data_offset": 0, 00:10:25.051 "data_size": 65536 00:10:25.051 }, 00:10:25.051 { 00:10:25.051 "name": "BaseBdev2", 00:10:25.051 "uuid": "7c2bf88a-9ea8-51d4-9ca7-ea7d41d3eccc", 00:10:25.051 "is_configured": true, 00:10:25.051 "data_offset": 0, 00:10:25.051 "data_size": 65536 00:10:25.051 } 00:10:25.051 ] 00:10:25.051 }' 00:10:25.051 14:35:16 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:25.309 14:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:10:25.309 14:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:25.309 14:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:10:25.309 14:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:10:25.309 14:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:25.309 14:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:25.310 14:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:25.310 14:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:25.310 14:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:25.310 14:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:25.310 14:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.310 14:35:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.310 14:35:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:25.310 14:35:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.310 14:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:25.310 "name": "raid_bdev1", 00:10:25.310 "uuid": "552f38b8-c110-4b58-ab0a-c6d9b8f31afa", 00:10:25.310 "strip_size_kb": 0, 00:10:25.310 "state": "online", 00:10:25.310 "raid_level": "raid1", 00:10:25.310 "superblock": false, 00:10:25.310 "num_base_bdevs": 2, 00:10:25.310 "num_base_bdevs_discovered": 
2, 00:10:25.310 "num_base_bdevs_operational": 2, 00:10:25.310 "base_bdevs_list": [ 00:10:25.310 { 00:10:25.310 "name": "spare", 00:10:25.310 "uuid": "d27b9aa6-6e35-5f71-b1db-a977f630a785", 00:10:25.310 "is_configured": true, 00:10:25.310 "data_offset": 0, 00:10:25.310 "data_size": 65536 00:10:25.310 }, 00:10:25.310 { 00:10:25.310 "name": "BaseBdev2", 00:10:25.310 "uuid": "7c2bf88a-9ea8-51d4-9ca7-ea7d41d3eccc", 00:10:25.310 "is_configured": true, 00:10:25.310 "data_offset": 0, 00:10:25.310 "data_size": 65536 00:10:25.310 } 00:10:25.310 ] 00:10:25.310 }' 00:10:25.310 14:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:25.310 14:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:25.310 14:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:25.310 14:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:25.310 14:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:25.310 14:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:25.310 14:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:25.310 14:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:25.310 14:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:25.310 14:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:25.310 14:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.310 14:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.310 14:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:25.310 14:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.310 14:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.310 14:35:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.310 14:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:25.310 14:35:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:25.310 14:35:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.310 14:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.310 "name": "raid_bdev1", 00:10:25.310 "uuid": "552f38b8-c110-4b58-ab0a-c6d9b8f31afa", 00:10:25.310 "strip_size_kb": 0, 00:10:25.310 "state": "online", 00:10:25.310 "raid_level": "raid1", 00:10:25.310 "superblock": false, 00:10:25.310 "num_base_bdevs": 2, 00:10:25.310 "num_base_bdevs_discovered": 2, 00:10:25.310 "num_base_bdevs_operational": 2, 00:10:25.310 "base_bdevs_list": [ 00:10:25.310 { 00:10:25.310 "name": "spare", 00:10:25.310 "uuid": "d27b9aa6-6e35-5f71-b1db-a977f630a785", 00:10:25.310 "is_configured": true, 00:10:25.310 "data_offset": 0, 00:10:25.310 "data_size": 65536 00:10:25.310 }, 00:10:25.310 { 00:10:25.310 "name": "BaseBdev2", 00:10:25.310 "uuid": "7c2bf88a-9ea8-51d4-9ca7-ea7d41d3eccc", 00:10:25.310 "is_configured": true, 00:10:25.310 "data_offset": 0, 00:10:25.310 "data_size": 65536 00:10:25.310 } 00:10:25.310 ] 00:10:25.310 }' 00:10:25.310 14:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.310 14:35:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:25.569 91.50 IOPS, 274.50 MiB/s 14:35:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:25.569 14:35:17 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.569 14:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:25.569 [2024-10-01 14:35:17.189364] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:25.569 [2024-10-01 14:35:17.189472] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:25.827 00:10:25.827 Latency(us) 00:10:25.827 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:25.827 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:10:25.827 raid_bdev1 : 8.12 90.62 271.87 0.00 0.00 15857.86 237.88 108083.99 00:10:25.827 =================================================================================================================== 00:10:25.827 Total : 90.62 271.87 0.00 0.00 15857.86 237.88 108083.99 00:10:25.827 [2024-10-01 14:35:17.297729] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:25.827 [2024-10-01 14:35:17.297868] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:25.827 [2024-10-01 14:35:17.297954] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:25.827 [2024-10-01 14:35:17.298131] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:25.827 { 00:10:25.827 "results": [ 00:10:25.827 { 00:10:25.827 "job": "raid_bdev1", 00:10:25.827 "core_mask": "0x1", 00:10:25.827 "workload": "randrw", 00:10:25.827 "percentage": 50, 00:10:25.827 "status": "finished", 00:10:25.827 "queue_depth": 2, 00:10:25.827 "io_size": 3145728, 00:10:25.827 "runtime": 8.121387, 00:10:25.827 "iops": 90.62491419261266, 00:10:25.827 "mibps": 271.874742577838, 00:10:25.827 "io_failed": 0, 00:10:25.827 "io_timeout": 0, 00:10:25.827 "avg_latency_us": 15857.863277591974, 00:10:25.827 
"min_latency_us": 237.8830769230769, 00:10:25.827 "max_latency_us": 108083.9876923077 00:10:25.827 } 00:10:25.827 ], 00:10:25.827 "core_count": 1 00:10:25.827 } 00:10:25.827 14:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.827 14:35:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.827 14:35:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:10:25.827 14:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.827 14:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:25.827 14:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.827 14:35:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:10:25.827 14:35:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:10:25.827 14:35:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:10:25.827 14:35:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:10:25.827 14:35:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:25.827 14:35:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:10:25.827 14:35:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:25.827 14:35:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:10:25.827 14:35:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:25.827 14:35:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:10:25.827 14:35:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:25.827 14:35:17 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:25.827 14:35:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:10:26.085 /dev/nbd0 00:10:26.085 14:35:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:26.085 14:35:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:26.085 14:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:10:26.085 14:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:10:26.085 14:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:26.085 14:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:26.085 14:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:10:26.085 14:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:10:26.085 14:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:26.085 14:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:26.085 14:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:26.085 1+0 records in 00:10:26.085 1+0 records out 00:10:26.085 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000200003 s, 20.5 MB/s 00:10:26.085 14:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:26.085 14:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:10:26.085 14:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:26.085 14:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:26.085 14:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:10:26.085 14:35:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:26.085 14:35:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:26.085 14:35:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:10:26.085 14:35:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:10:26.085 14:35:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:10:26.085 14:35:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:26.085 14:35:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:10:26.085 14:35:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:26.086 14:35:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:10:26.086 14:35:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:26.086 14:35:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:10:26.086 14:35:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:26.086 14:35:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:26.086 14:35:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:10:26.343 /dev/nbd1 00:10:26.343 14:35:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:26.343 14:35:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 
-- # waitfornbd nbd1 00:10:26.343 14:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:10:26.343 14:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:10:26.343 14:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:26.343 14:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:26.343 14:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:10:26.343 14:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:10:26.343 14:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:26.343 14:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:26.343 14:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:26.343 1+0 records in 00:10:26.343 1+0 records out 00:10:26.343 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000208739 s, 19.6 MB/s 00:10:26.343 14:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:26.343 14:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:10:26.343 14:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:26.343 14:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:26.343 14:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:10:26.343 14:35:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:26.343 14:35:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:26.343 
14:35:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:10:26.343 14:35:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:10:26.343 14:35:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:26.343 14:35:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:10:26.343 14:35:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:26.343 14:35:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:10:26.343 14:35:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:26.343 14:35:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:10:26.600 14:35:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:26.600 14:35:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:26.600 14:35:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:26.600 14:35:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:26.600 14:35:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:26.600 14:35:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:26.600 14:35:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:10:26.600 14:35:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:10:26.600 14:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:10:26.600 14:35:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:26.600 14:35:18 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:26.600 14:35:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:26.600 14:35:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:10:26.600 14:35:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:26.600 14:35:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:26.856 14:35:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:26.856 14:35:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:26.856 14:35:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:26.856 14:35:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:26.856 14:35:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:26.856 14:35:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:26.856 14:35:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:10:26.856 14:35:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:10:26.856 14:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:10:26.856 14:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 74624 00:10:26.856 14:35:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 74624 ']' 00:10:26.856 14:35:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 74624 00:10:26.856 14:35:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:10:26.856 14:35:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:26.856 
14:35:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74624 00:10:26.856 14:35:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:26.856 14:35:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:26.856 14:35:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74624' 00:10:26.856 killing process with pid 74624 00:10:26.856 14:35:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 74624 00:10:26.856 Received shutdown signal, test time was about 9.220995 seconds 00:10:26.856 00:10:26.856 Latency(us) 00:10:26.856 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:26.856 =================================================================================================================== 00:10:26.856 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:26.856 [2024-10-01 14:35:18.384958] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:26.856 14:35:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 74624 00:10:26.856 [2024-10-01 14:35:18.499136] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:27.789 14:35:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:10:27.789 00:10:27.789 real 0m11.498s 00:10:27.789 user 0m14.012s 00:10:27.789 sys 0m1.052s 00:10:27.789 14:35:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:27.789 ************************************ 00:10:27.789 14:35:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:10:27.789 END TEST raid_rebuild_test_io 00:10:27.789 ************************************ 00:10:27.789 14:35:19 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:10:27.789 
14:35:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:10:27.789 14:35:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:27.789 14:35:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:27.789 ************************************ 00:10:27.789 START TEST raid_rebuild_test_sb_io 00:10:27.789 ************************************ 00:10:27.789 14:35:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:10:27.789 14:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:10:27.789 14:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:10:27.789 14:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:10:27.789 14:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:10:27.789 14:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:10:27.789 14:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:10:27.789 14:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:27.789 14:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:10:27.789 14:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:27.789 14:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:27.789 14:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:10:27.789 14:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:27.789 14:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:27.789 14:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 
'BaseBdev2') 00:10:27.789 14:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:10:27.789 14:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:10:27.789 14:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:10:27.789 14:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:10:27.789 14:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:10:27.789 14:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:10:27.789 14:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:10:27.789 14:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:10:27.789 14:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:10:27.789 14:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:10:27.789 14:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=75002 00:10:27.789 14:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 75002 00:10:27.789 14:35:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 75002 ']' 00:10:27.789 14:35:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:27.789 14:35:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:27.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:27.789 14:35:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:27.789 14:35:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:27.789 14:35:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:27.789 14:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:10:27.789 [2024-10-01 14:35:19.292026] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:10:27.789 I/O size of 3145728 is greater than zero copy threshold (65536). 00:10:27.789 Zero copy mechanism will not be used. 00:10:27.789 [2024-10-01 14:35:19.292147] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75002 ] 00:10:27.789 [2024-10-01 14:35:19.435745] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.046 [2024-10-01 14:35:19.615458] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.304 [2024-10-01 14:35:19.750796] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:28.304 [2024-10-01 14:35:19.750833] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:28.561 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:28.562 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:10:28.562 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:28.562 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:28.562 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:28.562 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:28.562 BaseBdev1_malloc 00:10:28.562 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.562 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:10:28.562 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.562 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:28.562 [2024-10-01 14:35:20.174380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:10:28.562 [2024-10-01 14:35:20.174441] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.562 [2024-10-01 14:35:20.174462] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:28.562 [2024-10-01 14:35:20.174475] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.562 [2024-10-01 14:35:20.176576] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.562 [2024-10-01 14:35:20.176615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:28.562 BaseBdev1 00:10:28.562 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.562 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:28.562 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:28.562 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.562 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:28.562 BaseBdev2_malloc 00:10:28.562 14:35:20 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.562 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:10:28.562 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.562 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:28.562 [2024-10-01 14:35:20.223474] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:10:28.562 [2024-10-01 14:35:20.223531] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.562 [2024-10-01 14:35:20.223548] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:28.562 [2024-10-01 14:35:20.223559] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.562 [2024-10-01 14:35:20.225677] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.562 [2024-10-01 14:35:20.225724] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:28.562 BaseBdev2 00:10:28.562 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.562 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:10:28.562 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.562 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:28.819 spare_malloc 00:10:28.819 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.819 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:10:28.819 14:35:20 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.819 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:28.819 spare_delay 00:10:28.819 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.819 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:10:28.819 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.819 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:28.819 [2024-10-01 14:35:20.268131] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:28.819 [2024-10-01 14:35:20.268198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.819 [2024-10-01 14:35:20.268217] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:10:28.819 [2024-10-01 14:35:20.268228] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.819 [2024-10-01 14:35:20.270422] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.819 [2024-10-01 14:35:20.270466] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:10:28.819 spare 00:10:28.819 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.819 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:10:28.819 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.819 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:28.819 [2024-10-01 14:35:20.276169] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:10:28.819 [2024-10-01 14:35:20.278007] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:28.819 [2024-10-01 14:35:20.278166] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:28.819 [2024-10-01 14:35:20.278187] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:28.819 [2024-10-01 14:35:20.278467] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:28.819 [2024-10-01 14:35:20.278620] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:28.819 [2024-10-01 14:35:20.278636] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:28.819 [2024-10-01 14:35:20.278786] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.819 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.819 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:28.819 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:28.819 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.819 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:28.819 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:28.819 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:28.819 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.819 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.819 14:35:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.819 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.819 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.819 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.819 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:28.819 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:28.819 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.819 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.819 "name": "raid_bdev1", 00:10:28.819 "uuid": "1407132d-97fc-4be0-ac18-97f869ce0ddf", 00:10:28.819 "strip_size_kb": 0, 00:10:28.819 "state": "online", 00:10:28.819 "raid_level": "raid1", 00:10:28.819 "superblock": true, 00:10:28.819 "num_base_bdevs": 2, 00:10:28.819 "num_base_bdevs_discovered": 2, 00:10:28.819 "num_base_bdevs_operational": 2, 00:10:28.819 "base_bdevs_list": [ 00:10:28.819 { 00:10:28.819 "name": "BaseBdev1", 00:10:28.819 "uuid": "c76f6793-c686-5dbf-8de2-40f1e92cbb7a", 00:10:28.819 "is_configured": true, 00:10:28.819 "data_offset": 2048, 00:10:28.819 "data_size": 63488 00:10:28.819 }, 00:10:28.819 { 00:10:28.819 "name": "BaseBdev2", 00:10:28.820 "uuid": "abf90ce4-a7cf-565c-836b-f09f60a93054", 00:10:28.820 "is_configured": true, 00:10:28.820 "data_offset": 2048, 00:10:28.820 "data_size": 63488 00:10:28.820 } 00:10:28.820 ] 00:10:28.820 }' 00:10:28.820 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.820 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:29.078 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:29.078 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.078 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:29.078 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:10:29.078 [2024-10-01 14:35:20.580514] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:29.078 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.078 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:10:29.078 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.078 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.078 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:29.078 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:10:29.078 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.078 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:10:29.078 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:10:29.078 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:10:29.078 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.078 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:29.078 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 
00:10:29.078 [2024-10-01 14:35:20.644220] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:29.078 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.078 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:29.078 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:29.078 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:29.078 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:29.078 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:29.078 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:29.078 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.078 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.078 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.078 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.078 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.078 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.078 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:29.078 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:29.078 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.078 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:10:29.078 "name": "raid_bdev1", 00:10:29.078 "uuid": "1407132d-97fc-4be0-ac18-97f869ce0ddf", 00:10:29.078 "strip_size_kb": 0, 00:10:29.078 "state": "online", 00:10:29.078 "raid_level": "raid1", 00:10:29.078 "superblock": true, 00:10:29.078 "num_base_bdevs": 2, 00:10:29.078 "num_base_bdevs_discovered": 1, 00:10:29.078 "num_base_bdevs_operational": 1, 00:10:29.078 "base_bdevs_list": [ 00:10:29.078 { 00:10:29.078 "name": null, 00:10:29.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.078 "is_configured": false, 00:10:29.078 "data_offset": 0, 00:10:29.078 "data_size": 63488 00:10:29.078 }, 00:10:29.078 { 00:10:29.078 "name": "BaseBdev2", 00:10:29.078 "uuid": "abf90ce4-a7cf-565c-836b-f09f60a93054", 00:10:29.078 "is_configured": true, 00:10:29.078 "data_offset": 2048, 00:10:29.078 "data_size": 63488 00:10:29.078 } 00:10:29.078 ] 00:10:29.078 }' 00:10:29.078 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.078 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:29.078 [2024-10-01 14:35:20.733531] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:29.078 I/O size of 3145728 is greater than zero copy threshold (65536). 00:10:29.078 Zero copy mechanism will not be used. 00:10:29.078 Running I/O for 60 seconds... 
00:10:29.337 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:29.337 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.337 14:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:29.337 [2024-10-01 14:35:20.979515] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:29.337 14:35:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.337 14:35:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:10:29.596 [2024-10-01 14:35:21.041532] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:10:29.596 [2024-10-01 14:35:21.044390] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:29.596 [2024-10-01 14:35:21.183937] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:10:29.955 [2024-10-01 14:35:21.314866] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:10:29.955 [2024-10-01 14:35:21.315144] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:10:29.955 [2024-10-01 14:35:21.575239] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:10:30.214 146.00 IOPS, 438.00 MiB/s [2024-10-01 14:35:21.790558] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:10:30.471 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:30.471 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:10:30.471 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:30.471 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:30.471 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:30.471 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.471 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.471 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:30.472 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:30.472 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.472 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:30.472 "name": "raid_bdev1", 00:10:30.472 "uuid": "1407132d-97fc-4be0-ac18-97f869ce0ddf", 00:10:30.472 "strip_size_kb": 0, 00:10:30.472 "state": "online", 00:10:30.472 "raid_level": "raid1", 00:10:30.472 "superblock": true, 00:10:30.472 "num_base_bdevs": 2, 00:10:30.472 "num_base_bdevs_discovered": 2, 00:10:30.472 "num_base_bdevs_operational": 2, 00:10:30.472 "process": { 00:10:30.472 "type": "rebuild", 00:10:30.472 "target": "spare", 00:10:30.472 "progress": { 00:10:30.472 "blocks": 12288, 00:10:30.472 "percent": 19 00:10:30.472 } 00:10:30.472 }, 00:10:30.472 "base_bdevs_list": [ 00:10:30.472 { 00:10:30.472 "name": "spare", 00:10:30.472 "uuid": "1e40a052-865a-5f03-8384-28bc59398c86", 00:10:30.472 "is_configured": true, 00:10:30.472 "data_offset": 2048, 00:10:30.472 "data_size": 63488 00:10:30.472 }, 00:10:30.472 { 00:10:30.472 "name": "BaseBdev2", 00:10:30.472 "uuid": "abf90ce4-a7cf-565c-836b-f09f60a93054", 00:10:30.472 "is_configured": true, 00:10:30.472 
"data_offset": 2048, 00:10:30.472 "data_size": 63488 00:10:30.472 } 00:10:30.472 ] 00:10:30.472 }' 00:10:30.472 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:30.472 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:30.472 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:30.472 [2024-10-01 14:35:22.130079] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:10:30.472 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:30.472 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:10:30.472 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.472 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:30.472 [2024-10-01 14:35:22.144776] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:30.729 [2024-10-01 14:35:22.230196] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:10:30.729 [2024-10-01 14:35:22.239013] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:30.729 [2024-10-01 14:35:22.239059] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:30.729 [2024-10-01 14:35:22.239072] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:10:30.729 [2024-10-01 14:35:22.276174] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:10:30.729 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.729 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:30.729 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:30.729 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:30.729 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:30.729 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:30.729 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:30.729 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.729 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.729 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.729 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.729 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.729 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.729 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:30.729 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:30.729 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.729 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.729 "name": "raid_bdev1", 00:10:30.729 "uuid": "1407132d-97fc-4be0-ac18-97f869ce0ddf", 00:10:30.729 "strip_size_kb": 0, 00:10:30.729 "state": "online", 00:10:30.729 "raid_level": "raid1", 00:10:30.729 "superblock": true, 00:10:30.729 
"num_base_bdevs": 2, 00:10:30.729 "num_base_bdevs_discovered": 1, 00:10:30.729 "num_base_bdevs_operational": 1, 00:10:30.729 "base_bdevs_list": [ 00:10:30.729 { 00:10:30.729 "name": null, 00:10:30.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.729 "is_configured": false, 00:10:30.729 "data_offset": 0, 00:10:30.729 "data_size": 63488 00:10:30.729 }, 00:10:30.729 { 00:10:30.729 "name": "BaseBdev2", 00:10:30.729 "uuid": "abf90ce4-a7cf-565c-836b-f09f60a93054", 00:10:30.729 "is_configured": true, 00:10:30.729 "data_offset": 2048, 00:10:30.729 "data_size": 63488 00:10:30.729 } 00:10:30.729 ] 00:10:30.729 }' 00:10:30.729 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.729 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:30.987 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:30.987 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:30.987 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:30.987 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:30.987 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:30.987 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:30.987 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.987 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.987 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:30.987 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.987 14:35:22 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:30.987 "name": "raid_bdev1", 00:10:30.987 "uuid": "1407132d-97fc-4be0-ac18-97f869ce0ddf", 00:10:30.987 "strip_size_kb": 0, 00:10:30.987 "state": "online", 00:10:30.987 "raid_level": "raid1", 00:10:30.987 "superblock": true, 00:10:30.987 "num_base_bdevs": 2, 00:10:30.987 "num_base_bdevs_discovered": 1, 00:10:30.987 "num_base_bdevs_operational": 1, 00:10:30.987 "base_bdevs_list": [ 00:10:30.987 { 00:10:30.987 "name": null, 00:10:30.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.987 "is_configured": false, 00:10:30.987 "data_offset": 0, 00:10:30.987 "data_size": 63488 00:10:30.987 }, 00:10:30.987 { 00:10:30.987 "name": "BaseBdev2", 00:10:30.987 "uuid": "abf90ce4-a7cf-565c-836b-f09f60a93054", 00:10:30.987 "is_configured": true, 00:10:30.987 "data_offset": 2048, 00:10:30.987 "data_size": 63488 00:10:30.987 } 00:10:30.987 ] 00:10:30.987 }' 00:10:30.987 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:30.987 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:30.987 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:31.245 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:31.245 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:31.245 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.245 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:31.245 [2024-10-01 14:35:22.697911] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:31.245 14:35:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.245 14:35:22 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:10:31.245 156.50 IOPS, 469.50 MiB/s [2024-10-01 14:35:22.746758] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:31.245 [2024-10-01 14:35:22.748630] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:31.245 [2024-10-01 14:35:22.862819] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:10:31.245 [2024-10-01 14:35:22.863275] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:10:31.504 [2024-10-01 14:35:23.071217] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:10:31.504 [2024-10-01 14:35:23.071457] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:10:31.766 [2024-10-01 14:35:23.413629] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:10:32.025 [2024-10-01 14:35:23.628828] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:10:32.284 14:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:32.284 14:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:32.284 14:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:32.284 14:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:32.284 14:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:32.284 133.33 IOPS, 400.00 MiB/s 14:35:23 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:32.284 14:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.284 14:35:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.284 14:35:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:32.284 14:35:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.284 14:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:32.284 "name": "raid_bdev1", 00:10:32.284 "uuid": "1407132d-97fc-4be0-ac18-97f869ce0ddf", 00:10:32.284 "strip_size_kb": 0, 00:10:32.284 "state": "online", 00:10:32.284 "raid_level": "raid1", 00:10:32.284 "superblock": true, 00:10:32.284 "num_base_bdevs": 2, 00:10:32.284 "num_base_bdevs_discovered": 2, 00:10:32.284 "num_base_bdevs_operational": 2, 00:10:32.284 "process": { 00:10:32.284 "type": "rebuild", 00:10:32.284 "target": "spare", 00:10:32.284 "progress": { 00:10:32.284 "blocks": 10240, 00:10:32.284 "percent": 16 00:10:32.284 } 00:10:32.284 }, 00:10:32.284 "base_bdevs_list": [ 00:10:32.284 { 00:10:32.284 "name": "spare", 00:10:32.284 "uuid": "1e40a052-865a-5f03-8384-28bc59398c86", 00:10:32.284 "is_configured": true, 00:10:32.284 "data_offset": 2048, 00:10:32.284 "data_size": 63488 00:10:32.284 }, 00:10:32.284 { 00:10:32.284 "name": "BaseBdev2", 00:10:32.284 "uuid": "abf90ce4-a7cf-565c-836b-f09f60a93054", 00:10:32.284 "is_configured": true, 00:10:32.284 "data_offset": 2048, 00:10:32.284 "data_size": 63488 00:10:32.284 } 00:10:32.284 ] 00:10:32.284 }' 00:10:32.284 14:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:32.284 14:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:32.284 14:35:23 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:32.284 14:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:32.284 14:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:10:32.284 14:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:10:32.284 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:10:32.284 14:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:10:32.284 14:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:10:32.284 14:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:10:32.284 14:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=343 00:10:32.284 14:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:32.284 14:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:32.284 14:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:32.284 14:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:32.284 14:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:32.284 14:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:32.284 14:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:32.284 14:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.284 14:35:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.284 14:35:23 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:32.284 14:35:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.284 14:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:32.284 "name": "raid_bdev1", 00:10:32.284 "uuid": "1407132d-97fc-4be0-ac18-97f869ce0ddf", 00:10:32.284 "strip_size_kb": 0, 00:10:32.284 "state": "online", 00:10:32.284 "raid_level": "raid1", 00:10:32.284 "superblock": true, 00:10:32.284 "num_base_bdevs": 2, 00:10:32.284 "num_base_bdevs_discovered": 2, 00:10:32.284 "num_base_bdevs_operational": 2, 00:10:32.284 "process": { 00:10:32.284 "type": "rebuild", 00:10:32.284 "target": "spare", 00:10:32.284 "progress": { 00:10:32.284 "blocks": 12288, 00:10:32.284 "percent": 19 00:10:32.284 } 00:10:32.284 }, 00:10:32.284 "base_bdevs_list": [ 00:10:32.284 { 00:10:32.284 "name": "spare", 00:10:32.284 "uuid": "1e40a052-865a-5f03-8384-28bc59398c86", 00:10:32.284 "is_configured": true, 00:10:32.284 "data_offset": 2048, 00:10:32.284 "data_size": 63488 00:10:32.284 }, 00:10:32.284 { 00:10:32.284 "name": "BaseBdev2", 00:10:32.285 "uuid": "abf90ce4-a7cf-565c-836b-f09f60a93054", 00:10:32.285 "is_configured": true, 00:10:32.285 "data_offset": 2048, 00:10:32.285 "data_size": 63488 00:10:32.285 } 00:10:32.285 ] 00:10:32.285 }' 00:10:32.285 14:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:32.285 [2024-10-01 14:35:23.868492] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:10:32.285 14:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:32.285 14:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:32.285 14:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:10:32.285 14:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:32.543 [2024-10-01 14:35:23.997873] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:10:32.543 [2024-10-01 14:35:23.998121] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:10:32.802 [2024-10-01 14:35:24.466640] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:10:33.323 118.25 IOPS, 354.75 MiB/s 14:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:33.323 14:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:33.323 14:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:33.323 14:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:33.323 14:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:33.323 14:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:33.323 14:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.323 14:35:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.323 14:35:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:33.323 14:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:33.323 14:35:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.323 14:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:33.323 "name": 
"raid_bdev1", 00:10:33.323 "uuid": "1407132d-97fc-4be0-ac18-97f869ce0ddf", 00:10:33.323 "strip_size_kb": 0, 00:10:33.323 "state": "online", 00:10:33.323 "raid_level": "raid1", 00:10:33.323 "superblock": true, 00:10:33.323 "num_base_bdevs": 2, 00:10:33.323 "num_base_bdevs_discovered": 2, 00:10:33.323 "num_base_bdevs_operational": 2, 00:10:33.323 "process": { 00:10:33.323 "type": "rebuild", 00:10:33.323 "target": "spare", 00:10:33.323 "progress": { 00:10:33.323 "blocks": 28672, 00:10:33.323 "percent": 45 00:10:33.323 } 00:10:33.323 }, 00:10:33.324 "base_bdevs_list": [ 00:10:33.324 { 00:10:33.324 "name": "spare", 00:10:33.324 "uuid": "1e40a052-865a-5f03-8384-28bc59398c86", 00:10:33.324 "is_configured": true, 00:10:33.324 "data_offset": 2048, 00:10:33.324 "data_size": 63488 00:10:33.324 }, 00:10:33.324 { 00:10:33.324 "name": "BaseBdev2", 00:10:33.324 "uuid": "abf90ce4-a7cf-565c-836b-f09f60a93054", 00:10:33.324 "is_configured": true, 00:10:33.324 "data_offset": 2048, 00:10:33.324 "data_size": 63488 00:10:33.324 } 00:10:33.324 ] 00:10:33.324 }' 00:10:33.324 14:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:33.324 14:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:33.324 14:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:33.584 14:35:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:33.584 14:35:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:33.943 [2024-10-01 14:35:25.516017] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:10:34.203 [2024-10-01 14:35:25.720461] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:10:34.203 104.60 IOPS, 313.80 MiB/s [2024-10-01 
14:35:25.833664] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:10:34.462 14:35:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:34.462 14:35:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:34.462 14:35:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:34.462 14:35:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:34.462 14:35:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:34.462 14:35:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:34.462 14:35:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.462 14:35:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:34.462 14:35:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.462 14:35:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:34.462 14:35:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.462 14:35:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:34.462 "name": "raid_bdev1", 00:10:34.462 "uuid": "1407132d-97fc-4be0-ac18-97f869ce0ddf", 00:10:34.462 "strip_size_kb": 0, 00:10:34.462 "state": "online", 00:10:34.462 "raid_level": "raid1", 00:10:34.462 "superblock": true, 00:10:34.462 "num_base_bdevs": 2, 00:10:34.462 "num_base_bdevs_discovered": 2, 00:10:34.462 "num_base_bdevs_operational": 2, 00:10:34.462 "process": { 00:10:34.462 "type": "rebuild", 00:10:34.462 "target": "spare", 00:10:34.462 "progress": { 00:10:34.462 "blocks": 47104, 
00:10:34.462 "percent": 74 00:10:34.462 } 00:10:34.462 }, 00:10:34.462 "base_bdevs_list": [ 00:10:34.462 { 00:10:34.462 "name": "spare", 00:10:34.462 "uuid": "1e40a052-865a-5f03-8384-28bc59398c86", 00:10:34.462 "is_configured": true, 00:10:34.462 "data_offset": 2048, 00:10:34.462 "data_size": 63488 00:10:34.462 }, 00:10:34.462 { 00:10:34.462 "name": "BaseBdev2", 00:10:34.462 "uuid": "abf90ce4-a7cf-565c-836b-f09f60a93054", 00:10:34.462 "is_configured": true, 00:10:34.462 "data_offset": 2048, 00:10:34.462 "data_size": 63488 00:10:34.462 } 00:10:34.462 ] 00:10:34.462 }' 00:10:34.462 14:35:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:34.462 14:35:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:34.462 14:35:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:34.462 14:35:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:34.462 14:35:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:34.719 [2024-10-01 14:35:26.254943] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:10:34.978 [2024-10-01 14:35:26.592519] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:10:35.493 94.67 IOPS, 284.00 MiB/s [2024-10-01 14:35:26.921190] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:10:35.493 [2024-10-01 14:35:27.026243] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:10:35.493 [2024-10-01 14:35:27.027926] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:35.493 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:35.493 
14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:35.493 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:35.493 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:35.493 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:35.493 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:35.493 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.493 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:35.493 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.493 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:35.493 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.493 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:35.493 "name": "raid_bdev1", 00:10:35.493 "uuid": "1407132d-97fc-4be0-ac18-97f869ce0ddf", 00:10:35.493 "strip_size_kb": 0, 00:10:35.493 "state": "online", 00:10:35.493 "raid_level": "raid1", 00:10:35.493 "superblock": true, 00:10:35.493 "num_base_bdevs": 2, 00:10:35.493 "num_base_bdevs_discovered": 2, 00:10:35.493 "num_base_bdevs_operational": 2, 00:10:35.493 "base_bdevs_list": [ 00:10:35.493 { 00:10:35.493 "name": "spare", 00:10:35.494 "uuid": "1e40a052-865a-5f03-8384-28bc59398c86", 00:10:35.494 "is_configured": true, 00:10:35.494 "data_offset": 2048, 00:10:35.494 "data_size": 63488 00:10:35.494 }, 00:10:35.494 { 00:10:35.494 "name": "BaseBdev2", 00:10:35.494 "uuid": "abf90ce4-a7cf-565c-836b-f09f60a93054", 00:10:35.494 "is_configured": true, 
00:10:35.494 "data_offset": 2048, 00:10:35.494 "data_size": 63488 00:10:35.494 } 00:10:35.494 ] 00:10:35.494 }' 00:10:35.494 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:35.751 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:10:35.751 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:35.751 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:10:35.751 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:10:35.751 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:35.751 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:35.751 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:35.751 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:35.751 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:35.751 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.751 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:35.751 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.751 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:35.751 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.751 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:35.751 "name": "raid_bdev1", 00:10:35.751 "uuid": 
"1407132d-97fc-4be0-ac18-97f869ce0ddf", 00:10:35.751 "strip_size_kb": 0, 00:10:35.751 "state": "online", 00:10:35.751 "raid_level": "raid1", 00:10:35.751 "superblock": true, 00:10:35.751 "num_base_bdevs": 2, 00:10:35.751 "num_base_bdevs_discovered": 2, 00:10:35.751 "num_base_bdevs_operational": 2, 00:10:35.751 "base_bdevs_list": [ 00:10:35.751 { 00:10:35.751 "name": "spare", 00:10:35.751 "uuid": "1e40a052-865a-5f03-8384-28bc59398c86", 00:10:35.751 "is_configured": true, 00:10:35.751 "data_offset": 2048, 00:10:35.751 "data_size": 63488 00:10:35.751 }, 00:10:35.751 { 00:10:35.751 "name": "BaseBdev2", 00:10:35.751 "uuid": "abf90ce4-a7cf-565c-836b-f09f60a93054", 00:10:35.751 "is_configured": true, 00:10:35.751 "data_offset": 2048, 00:10:35.751 "data_size": 63488 00:10:35.751 } 00:10:35.751 ] 00:10:35.751 }' 00:10:35.751 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:35.751 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:35.751 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:35.751 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:35.751 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:35.751 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:35.751 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:35.751 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:35.751 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:35.751 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:35.751 14:35:27 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.751 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.751 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.751 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.751 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.751 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:35.751 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.751 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:35.751 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.751 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.751 "name": "raid_bdev1", 00:10:35.751 "uuid": "1407132d-97fc-4be0-ac18-97f869ce0ddf", 00:10:35.751 "strip_size_kb": 0, 00:10:35.751 "state": "online", 00:10:35.751 "raid_level": "raid1", 00:10:35.751 "superblock": true, 00:10:35.751 "num_base_bdevs": 2, 00:10:35.751 "num_base_bdevs_discovered": 2, 00:10:35.751 "num_base_bdevs_operational": 2, 00:10:35.751 "base_bdevs_list": [ 00:10:35.751 { 00:10:35.751 "name": "spare", 00:10:35.751 "uuid": "1e40a052-865a-5f03-8384-28bc59398c86", 00:10:35.751 "is_configured": true, 00:10:35.751 "data_offset": 2048, 00:10:35.751 "data_size": 63488 00:10:35.751 }, 00:10:35.751 { 00:10:35.751 "name": "BaseBdev2", 00:10:35.751 "uuid": "abf90ce4-a7cf-565c-836b-f09f60a93054", 00:10:35.751 "is_configured": true, 00:10:35.751 "data_offset": 2048, 00:10:35.751 "data_size": 63488 00:10:35.751 } 00:10:35.751 ] 00:10:35.751 }' 00:10:35.751 14:35:27 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.751 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:36.009 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:36.009 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.009 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:36.009 [2024-10-01 14:35:27.630428] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:36.009 [2024-10-01 14:35:27.630559] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:36.009 00:10:36.009 Latency(us) 00:10:36.009 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:36.009 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:10:36.009 raid_bdev1 : 6.93 86.57 259.71 0.00 0.00 15724.76 261.51 115343.36 00:10:36.009 =================================================================================================================== 00:10:36.009 Total : 86.57 259.71 0.00 0.00 15724.76 261.51 115343.36 00:10:36.009 { 00:10:36.009 "results": [ 00:10:36.009 { 00:10:36.009 "job": "raid_bdev1", 00:10:36.009 "core_mask": "0x1", 00:10:36.009 "workload": "randrw", 00:10:36.009 "percentage": 50, 00:10:36.009 "status": "finished", 00:10:36.009 "queue_depth": 2, 00:10:36.009 "io_size": 3145728, 00:10:36.009 "runtime": 6.930824, 00:10:36.009 "iops": 86.56979314436494, 00:10:36.009 "mibps": 259.7093794330948, 00:10:36.009 "io_failed": 0, 00:10:36.009 "io_timeout": 0, 00:10:36.009 "avg_latency_us": 15724.759302564102, 00:10:36.009 "min_latency_us": 261.51384615384615, 00:10:36.009 "max_latency_us": 115343.36 00:10:36.009 } 00:10:36.009 ], 00:10:36.009 "core_count": 1 00:10:36.009 } 00:10:36.009 [2024-10-01 14:35:27.678534] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:36.009 [2024-10-01 14:35:27.678573] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:36.009 [2024-10-01 14:35:27.678640] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:36.009 [2024-10-01 14:35:27.678653] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:36.009 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.009 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.009 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.009 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:36.009 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:10:36.267 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.267 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:10:36.267 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:10:36.267 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:10:36.267 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:10:36.267 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:36.267 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:10:36.267 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:36.267 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0') 00:10:36.267 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:36.267 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:10:36.267 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:36.267 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:36.267 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:10:36.267 /dev/nbd0 00:10:36.267 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:36.267 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:36.267 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:10:36.267 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:10:36.267 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:36.267 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:36.267 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:10:36.525 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:10:36.525 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:36.525 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:36.525 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:36.525 1+0 records in 00:10:36.525 1+0 records out 00:10:36.525 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000246055 s, 16.6 MB/s 00:10:36.525 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:36.525 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:10:36.525 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:36.525 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:36.525 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:10:36.525 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:36.525 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:36.525 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:10:36.525 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:10:36.525 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:10:36.525 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:36.525 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:10:36.525 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:36.525 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:10:36.525 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:36.525 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:10:36.525 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:36.525 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:36.525 14:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:10:36.525 /dev/nbd1 00:10:36.525 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:36.525 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:36.525 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:10:36.525 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:10:36.526 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:36.526 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:36.526 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:10:36.526 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:10:36.526 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:36.526 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:36.526 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:36.526 1+0 records in 00:10:36.526 1+0 records out 00:10:36.526 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000180535 s, 22.7 MB/s 00:10:36.526 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:36.526 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:10:36.526 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # 
rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:36.526 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:36.526 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:10:36.526 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:36.526 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:36.526 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:10:36.784 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:10:36.784 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:36.784 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:10:36.784 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:36.784 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:10:36.784 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:36.784 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:10:37.042 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:37.042 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:37.042 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:37.042 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:37.042 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:37.042 14:35:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:37.042 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:10:37.042 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:10:37.042 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:10:37.042 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:37.042 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:37.042 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:37.042 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:10:37.042 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:37.042 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:37.300 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:37.300 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:37.300 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:37.300 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:37.300 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:37.300 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:37.300 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:10:37.300 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:10:37.300 14:35:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:10:37.300 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:10:37.300 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.300 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:37.300 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.300 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:10:37.300 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.300 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:37.300 [2024-10-01 14:35:28.744190] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:37.300 [2024-10-01 14:35:28.744239] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:37.300 [2024-10-01 14:35:28.744255] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:37.300 [2024-10-01 14:35:28.744265] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:37.300 [2024-10-01 14:35:28.746160] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:37.300 [2024-10-01 14:35:28.746195] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:10:37.300 [2024-10-01 14:35:28.746269] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:10:37.300 [2024-10-01 14:35:28.746309] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:37.300 [2024-10-01 14:35:28.746416] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:37.300 spare 
00:10:37.300 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.300 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:10:37.300 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.300 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:37.300 [2024-10-01 14:35:28.846496] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:10:37.300 [2024-10-01 14:35:28.846522] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:37.300 [2024-10-01 14:35:28.846805] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:10:37.300 [2024-10-01 14:35:28.846957] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:10:37.300 [2024-10-01 14:35:28.846968] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:10:37.300 [2024-10-01 14:35:28.847113] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:37.300 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.300 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:37.300 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:37.300 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:37.300 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:37.300 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:37.300 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:10:37.300 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.300 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.300 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.300 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.300 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:37.300 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.300 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.300 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:37.300 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.300 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.300 "name": "raid_bdev1", 00:10:37.300 "uuid": "1407132d-97fc-4be0-ac18-97f869ce0ddf", 00:10:37.300 "strip_size_kb": 0, 00:10:37.300 "state": "online", 00:10:37.300 "raid_level": "raid1", 00:10:37.300 "superblock": true, 00:10:37.300 "num_base_bdevs": 2, 00:10:37.300 "num_base_bdevs_discovered": 2, 00:10:37.300 "num_base_bdevs_operational": 2, 00:10:37.300 "base_bdevs_list": [ 00:10:37.300 { 00:10:37.300 "name": "spare", 00:10:37.300 "uuid": "1e40a052-865a-5f03-8384-28bc59398c86", 00:10:37.300 "is_configured": true, 00:10:37.300 "data_offset": 2048, 00:10:37.300 "data_size": 63488 00:10:37.300 }, 00:10:37.300 { 00:10:37.300 "name": "BaseBdev2", 00:10:37.300 "uuid": "abf90ce4-a7cf-565c-836b-f09f60a93054", 00:10:37.300 "is_configured": true, 00:10:37.300 "data_offset": 2048, 00:10:37.300 "data_size": 63488 00:10:37.300 } 00:10:37.300 ] 00:10:37.300 }' 
00:10:37.300 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.300 14:35:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:37.560 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:37.560 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:37.560 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:37.560 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:37.560 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:37.560 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.560 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.560 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:37.560 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:37.560 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.560 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:37.560 "name": "raid_bdev1", 00:10:37.560 "uuid": "1407132d-97fc-4be0-ac18-97f869ce0ddf", 00:10:37.560 "strip_size_kb": 0, 00:10:37.560 "state": "online", 00:10:37.560 "raid_level": "raid1", 00:10:37.560 "superblock": true, 00:10:37.560 "num_base_bdevs": 2, 00:10:37.560 "num_base_bdevs_discovered": 2, 00:10:37.560 "num_base_bdevs_operational": 2, 00:10:37.560 "base_bdevs_list": [ 00:10:37.560 { 00:10:37.560 "name": "spare", 00:10:37.560 "uuid": "1e40a052-865a-5f03-8384-28bc59398c86", 00:10:37.560 "is_configured": true, 00:10:37.560 "data_offset": 
2048, 00:10:37.560 "data_size": 63488 00:10:37.560 }, 00:10:37.560 { 00:10:37.560 "name": "BaseBdev2", 00:10:37.560 "uuid": "abf90ce4-a7cf-565c-836b-f09f60a93054", 00:10:37.560 "is_configured": true, 00:10:37.560 "data_offset": 2048, 00:10:37.560 "data_size": 63488 00:10:37.560 } 00:10:37.560 ] 00:10:37.560 }' 00:10:37.560 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:37.560 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:37.560 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:37.818 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:37.818 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.818 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:10:37.818 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.818 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:37.818 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.818 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:10:37.818 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:10:37.818 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.818 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:37.818 [2024-10-01 14:35:29.308421] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:37.818 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.818 
14:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:37.818 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:37.818 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:37.818 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:37.818 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:37.818 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:37.818 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.818 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.818 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.818 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.818 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:37.818 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.818 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.818 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:37.818 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.818 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.818 "name": "raid_bdev1", 00:10:37.818 "uuid": "1407132d-97fc-4be0-ac18-97f869ce0ddf", 00:10:37.818 "strip_size_kb": 0, 00:10:37.818 "state": "online", 00:10:37.818 "raid_level": "raid1", 00:10:37.818 
"superblock": true, 00:10:37.818 "num_base_bdevs": 2, 00:10:37.818 "num_base_bdevs_discovered": 1, 00:10:37.818 "num_base_bdevs_operational": 1, 00:10:37.818 "base_bdevs_list": [ 00:10:37.818 { 00:10:37.818 "name": null, 00:10:37.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.818 "is_configured": false, 00:10:37.818 "data_offset": 0, 00:10:37.818 "data_size": 63488 00:10:37.818 }, 00:10:37.818 { 00:10:37.818 "name": "BaseBdev2", 00:10:37.818 "uuid": "abf90ce4-a7cf-565c-836b-f09f60a93054", 00:10:37.818 "is_configured": true, 00:10:37.818 "data_offset": 2048, 00:10:37.818 "data_size": 63488 00:10:37.818 } 00:10:37.818 ] 00:10:37.818 }' 00:10:37.818 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.818 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:38.075 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:38.075 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.075 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:38.075 [2024-10-01 14:35:29.632522] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:38.075 [2024-10-01 14:35:29.632672] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:10:38.075 [2024-10-01 14:35:29.632684] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:10:38.075 [2024-10-01 14:35:29.632732] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:38.075 [2024-10-01 14:35:29.641240] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:10:38.075 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.075 14:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:10:38.075 [2024-10-01 14:35:29.642803] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:39.008 14:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:39.008 14:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:39.008 14:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:39.008 14:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:39.008 14:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:39.008 14:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.008 14:35:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.008 14:35:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:39.008 14:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:39.008 14:35:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.008 14:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:39.008 "name": "raid_bdev1", 00:10:39.008 "uuid": "1407132d-97fc-4be0-ac18-97f869ce0ddf", 00:10:39.008 "strip_size_kb": 0, 00:10:39.008 "state": "online", 
00:10:39.008 "raid_level": "raid1", 00:10:39.008 "superblock": true, 00:10:39.008 "num_base_bdevs": 2, 00:10:39.008 "num_base_bdevs_discovered": 2, 00:10:39.008 "num_base_bdevs_operational": 2, 00:10:39.008 "process": { 00:10:39.008 "type": "rebuild", 00:10:39.008 "target": "spare", 00:10:39.008 "progress": { 00:10:39.008 "blocks": 20480, 00:10:39.008 "percent": 32 00:10:39.008 } 00:10:39.008 }, 00:10:39.008 "base_bdevs_list": [ 00:10:39.008 { 00:10:39.008 "name": "spare", 00:10:39.008 "uuid": "1e40a052-865a-5f03-8384-28bc59398c86", 00:10:39.008 "is_configured": true, 00:10:39.008 "data_offset": 2048, 00:10:39.008 "data_size": 63488 00:10:39.008 }, 00:10:39.008 { 00:10:39.008 "name": "BaseBdev2", 00:10:39.008 "uuid": "abf90ce4-a7cf-565c-836b-f09f60a93054", 00:10:39.008 "is_configured": true, 00:10:39.008 "data_offset": 2048, 00:10:39.008 "data_size": 63488 00:10:39.008 } 00:10:39.008 ] 00:10:39.008 }' 00:10:39.008 14:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:39.266 14:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:39.266 14:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:39.266 14:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:39.266 14:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:10:39.266 14:35:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.266 14:35:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:39.266 [2024-10-01 14:35:30.745165] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:39.266 [2024-10-01 14:35:30.747896] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:10:39.266 [2024-10-01 
14:35:30.747945] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:39.266 [2024-10-01 14:35:30.747959] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:39.266 [2024-10-01 14:35:30.747966] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:10:39.266 14:35:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.266 14:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:39.266 14:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:39.266 14:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:39.266 14:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:39.266 14:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:39.266 14:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:39.266 14:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.266 14:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.266 14:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.266 14:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.266 14:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:39.266 14:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.266 14:35:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.266 14:35:30 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:10:39.266 14:35:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.266 14:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.266 "name": "raid_bdev1", 00:10:39.266 "uuid": "1407132d-97fc-4be0-ac18-97f869ce0ddf", 00:10:39.266 "strip_size_kb": 0, 00:10:39.266 "state": "online", 00:10:39.266 "raid_level": "raid1", 00:10:39.266 "superblock": true, 00:10:39.266 "num_base_bdevs": 2, 00:10:39.266 "num_base_bdevs_discovered": 1, 00:10:39.266 "num_base_bdevs_operational": 1, 00:10:39.266 "base_bdevs_list": [ 00:10:39.266 { 00:10:39.266 "name": null, 00:10:39.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.266 "is_configured": false, 00:10:39.266 "data_offset": 0, 00:10:39.266 "data_size": 63488 00:10:39.266 }, 00:10:39.266 { 00:10:39.266 "name": "BaseBdev2", 00:10:39.266 "uuid": "abf90ce4-a7cf-565c-836b-f09f60a93054", 00:10:39.266 "is_configured": true, 00:10:39.266 "data_offset": 2048, 00:10:39.266 "data_size": 63488 00:10:39.266 } 00:10:39.266 ] 00:10:39.266 }' 00:10:39.266 14:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.266 14:35:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:39.523 14:35:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:10:39.523 14:35:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.523 14:35:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:39.523 [2024-10-01 14:35:31.077770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:39.523 [2024-10-01 14:35:31.077823] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.523 [2024-10-01 14:35:31.077842] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:10:39.523 [2024-10-01 14:35:31.077850] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.523 [2024-10-01 14:35:31.078248] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.523 [2024-10-01 14:35:31.078261] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:10:39.523 [2024-10-01 14:35:31.078340] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:10:39.524 [2024-10-01 14:35:31.078349] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:10:39.524 [2024-10-01 14:35:31.078359] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:10:39.524 [2024-10-01 14:35:31.078374] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:39.524 [2024-10-01 14:35:31.087044] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:10:39.524 spare 00:10:39.524 14:35:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.524 14:35:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:10:39.524 [2024-10-01 14:35:31.088730] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:40.456 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:40.457 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:40.457 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:40.457 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:40.457 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:40.457 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.457 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:40.457 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.457 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:40.457 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.457 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:40.457 "name": "raid_bdev1", 00:10:40.457 "uuid": "1407132d-97fc-4be0-ac18-97f869ce0ddf", 00:10:40.457 "strip_size_kb": 0, 00:10:40.457 "state": "online", 00:10:40.457 "raid_level": "raid1", 00:10:40.457 "superblock": true, 00:10:40.457 "num_base_bdevs": 2, 00:10:40.457 "num_base_bdevs_discovered": 2, 00:10:40.457 "num_base_bdevs_operational": 2, 00:10:40.457 "process": { 00:10:40.457 "type": "rebuild", 00:10:40.457 "target": "spare", 00:10:40.457 "progress": { 00:10:40.457 "blocks": 20480, 00:10:40.457 "percent": 32 00:10:40.457 } 00:10:40.457 }, 00:10:40.457 "base_bdevs_list": [ 00:10:40.457 { 00:10:40.457 "name": "spare", 00:10:40.457 "uuid": "1e40a052-865a-5f03-8384-28bc59398c86", 00:10:40.457 "is_configured": true, 00:10:40.457 "data_offset": 2048, 00:10:40.457 "data_size": 63488 00:10:40.457 }, 00:10:40.457 { 00:10:40.457 "name": "BaseBdev2", 00:10:40.457 "uuid": "abf90ce4-a7cf-565c-836b-f09f60a93054", 00:10:40.457 "is_configured": true, 00:10:40.457 "data_offset": 2048, 00:10:40.457 "data_size": 63488 00:10:40.457 } 00:10:40.457 ] 00:10:40.457 }' 00:10:40.457 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:40.715 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:10:40.715 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:40.715 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:40.715 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:10:40.715 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.715 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:40.715 [2024-10-01 14:35:32.187118] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:40.715 [2024-10-01 14:35:32.193829] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:10:40.715 [2024-10-01 14:35:32.193976] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:40.715 [2024-10-01 14:35:32.193991] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:40.715 [2024-10-01 14:35:32.193999] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:10:40.715 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.715 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:40.715 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:40.715 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:40.715 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:40.715 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:40.715 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:10:40.715 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.715 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.715 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.715 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.715 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.715 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.715 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:40.715 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:40.715 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.715 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.715 "name": "raid_bdev1", 00:10:40.715 "uuid": "1407132d-97fc-4be0-ac18-97f869ce0ddf", 00:10:40.715 "strip_size_kb": 0, 00:10:40.715 "state": "online", 00:10:40.715 "raid_level": "raid1", 00:10:40.715 "superblock": true, 00:10:40.715 "num_base_bdevs": 2, 00:10:40.715 "num_base_bdevs_discovered": 1, 00:10:40.715 "num_base_bdevs_operational": 1, 00:10:40.715 "base_bdevs_list": [ 00:10:40.715 { 00:10:40.715 "name": null, 00:10:40.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.715 "is_configured": false, 00:10:40.715 "data_offset": 0, 00:10:40.715 "data_size": 63488 00:10:40.715 }, 00:10:40.715 { 00:10:40.715 "name": "BaseBdev2", 00:10:40.715 "uuid": "abf90ce4-a7cf-565c-836b-f09f60a93054", 00:10:40.715 "is_configured": true, 00:10:40.715 "data_offset": 2048, 00:10:40.715 "data_size": 63488 00:10:40.715 } 00:10:40.715 ] 00:10:40.715 }' 
00:10:40.715 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.715 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:40.973 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:40.973 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:40.973 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:40.973 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:40.973 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:40.973 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.973 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:40.973 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.973 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:40.973 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.973 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:40.973 "name": "raid_bdev1", 00:10:40.973 "uuid": "1407132d-97fc-4be0-ac18-97f869ce0ddf", 00:10:40.973 "strip_size_kb": 0, 00:10:40.973 "state": "online", 00:10:40.973 "raid_level": "raid1", 00:10:40.973 "superblock": true, 00:10:40.973 "num_base_bdevs": 2, 00:10:40.973 "num_base_bdevs_discovered": 1, 00:10:40.973 "num_base_bdevs_operational": 1, 00:10:40.973 "base_bdevs_list": [ 00:10:40.973 { 00:10:40.973 "name": null, 00:10:40.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.973 "is_configured": false, 00:10:40.973 "data_offset": 0, 
00:10:40.973 "data_size": 63488 00:10:40.973 }, 00:10:40.973 { 00:10:40.973 "name": "BaseBdev2", 00:10:40.973 "uuid": "abf90ce4-a7cf-565c-836b-f09f60a93054", 00:10:40.973 "is_configured": true, 00:10:40.973 "data_offset": 2048, 00:10:40.973 "data_size": 63488 00:10:40.973 } 00:10:40.973 ] 00:10:40.973 }' 00:10:40.973 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:40.973 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:40.973 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:40.973 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:40.973 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:10:40.973 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.973 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:40.973 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.973 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:10:40.973 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.973 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:40.973 [2024-10-01 14:35:32.627538] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:10:40.973 [2024-10-01 14:35:32.627587] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:40.973 [2024-10-01 14:35:32.627603] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:40.973 [2024-10-01 14:35:32.627614] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:40.973 [2024-10-01 14:35:32.627971] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:40.973 [2024-10-01 14:35:32.627994] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:40.973 [2024-10-01 14:35:32.628055] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:10:40.973 [2024-10-01 14:35:32.628068] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:10:40.973 [2024-10-01 14:35:32.628075] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:10:40.973 [2024-10-01 14:35:32.628087] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:10:40.973 BaseBdev1 00:10:40.973 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.973 14:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:10:42.394 14:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:42.394 14:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:42.394 14:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:42.394 14:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:42.394 14:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:42.394 14:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:42.394 14:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.395 14:35:33 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.395 14:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.395 14:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.395 14:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.395 14:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:42.395 14:35:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.395 14:35:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:42.395 14:35:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.395 14:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.395 "name": "raid_bdev1", 00:10:42.395 "uuid": "1407132d-97fc-4be0-ac18-97f869ce0ddf", 00:10:42.395 "strip_size_kb": 0, 00:10:42.395 "state": "online", 00:10:42.395 "raid_level": "raid1", 00:10:42.395 "superblock": true, 00:10:42.395 "num_base_bdevs": 2, 00:10:42.395 "num_base_bdevs_discovered": 1, 00:10:42.395 "num_base_bdevs_operational": 1, 00:10:42.395 "base_bdevs_list": [ 00:10:42.395 { 00:10:42.395 "name": null, 00:10:42.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.395 "is_configured": false, 00:10:42.395 "data_offset": 0, 00:10:42.395 "data_size": 63488 00:10:42.395 }, 00:10:42.395 { 00:10:42.395 "name": "BaseBdev2", 00:10:42.395 "uuid": "abf90ce4-a7cf-565c-836b-f09f60a93054", 00:10:42.395 "is_configured": true, 00:10:42.395 "data_offset": 2048, 00:10:42.395 "data_size": 63488 00:10:42.395 } 00:10:42.395 ] 00:10:42.395 }' 00:10:42.395 14:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.395 14:35:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:10:42.395 14:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:42.395 14:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:42.395 14:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:42.395 14:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:42.395 14:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:42.395 14:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.395 14:35:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.395 14:35:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:42.395 14:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:42.395 14:35:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.395 14:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:42.395 "name": "raid_bdev1", 00:10:42.395 "uuid": "1407132d-97fc-4be0-ac18-97f869ce0ddf", 00:10:42.395 "strip_size_kb": 0, 00:10:42.395 "state": "online", 00:10:42.395 "raid_level": "raid1", 00:10:42.395 "superblock": true, 00:10:42.395 "num_base_bdevs": 2, 00:10:42.395 "num_base_bdevs_discovered": 1, 00:10:42.395 "num_base_bdevs_operational": 1, 00:10:42.395 "base_bdevs_list": [ 00:10:42.395 { 00:10:42.395 "name": null, 00:10:42.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.395 "is_configured": false, 00:10:42.395 "data_offset": 0, 00:10:42.395 "data_size": 63488 00:10:42.395 }, 00:10:42.395 { 00:10:42.395 "name": "BaseBdev2", 00:10:42.395 "uuid": "abf90ce4-a7cf-565c-836b-f09f60a93054", 00:10:42.395 "is_configured": true, 
00:10:42.395 "data_offset": 2048, 00:10:42.395 "data_size": 63488 00:10:42.395 } 00:10:42.395 ] 00:10:42.395 }' 00:10:42.395 14:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:42.395 14:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:42.395 14:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:42.395 14:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:42.395 14:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:10:42.395 14:35:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:10:42.395 14:35:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:10:42.395 14:35:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:42.395 14:35:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:42.395 14:35:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:42.395 14:35:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:42.395 14:35:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:10:42.395 14:35:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.395 14:35:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:42.395 [2024-10-01 14:35:34.052006] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:42.395 [2024-10-01 14:35:34.052125] 
bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:10:42.395 [2024-10-01 14:35:34.052135] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:10:42.395 request: 00:10:42.395 { 00:10:42.395 "base_bdev": "BaseBdev1", 00:10:42.395 "raid_bdev": "raid_bdev1", 00:10:42.395 "method": "bdev_raid_add_base_bdev", 00:10:42.395 "req_id": 1 00:10:42.395 } 00:10:42.395 Got JSON-RPC error response 00:10:42.395 response: 00:10:42.395 { 00:10:42.395 "code": -22, 00:10:42.395 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:10:42.395 } 00:10:42.395 14:35:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:42.395 14:35:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:10:42.395 14:35:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:42.395 14:35:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:42.395 14:35:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:42.395 14:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:10:43.761 14:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:43.761 14:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:43.761 14:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:43.761 14:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:43.761 14:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:43.761 14:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:10:43.761 14:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.761 14:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.761 14:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.761 14:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.761 14:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.761 14:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:43.761 14:35:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.761 14:35:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:43.762 14:35:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.762 14:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.762 "name": "raid_bdev1", 00:10:43.762 "uuid": "1407132d-97fc-4be0-ac18-97f869ce0ddf", 00:10:43.762 "strip_size_kb": 0, 00:10:43.762 "state": "online", 00:10:43.762 "raid_level": "raid1", 00:10:43.762 "superblock": true, 00:10:43.762 "num_base_bdevs": 2, 00:10:43.762 "num_base_bdevs_discovered": 1, 00:10:43.762 "num_base_bdevs_operational": 1, 00:10:43.762 "base_bdevs_list": [ 00:10:43.762 { 00:10:43.762 "name": null, 00:10:43.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.762 "is_configured": false, 00:10:43.762 "data_offset": 0, 00:10:43.762 "data_size": 63488 00:10:43.762 }, 00:10:43.762 { 00:10:43.762 "name": "BaseBdev2", 00:10:43.762 "uuid": "abf90ce4-a7cf-565c-836b-f09f60a93054", 00:10:43.762 "is_configured": true, 00:10:43.762 "data_offset": 2048, 00:10:43.762 "data_size": 63488 00:10:43.762 } 00:10:43.762 ] 00:10:43.762 }' 
00:10:43.762 14:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.762 14:35:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:43.762 14:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:43.762 14:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:43.762 14:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:43.762 14:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:43.762 14:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:43.762 14:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.762 14:35:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.762 14:35:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:43.762 14:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:43.762 14:35:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.762 14:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:43.762 "name": "raid_bdev1", 00:10:43.762 "uuid": "1407132d-97fc-4be0-ac18-97f869ce0ddf", 00:10:43.762 "strip_size_kb": 0, 00:10:43.762 "state": "online", 00:10:43.762 "raid_level": "raid1", 00:10:43.762 "superblock": true, 00:10:43.762 "num_base_bdevs": 2, 00:10:43.762 "num_base_bdevs_discovered": 1, 00:10:43.762 "num_base_bdevs_operational": 1, 00:10:43.762 "base_bdevs_list": [ 00:10:43.762 { 00:10:43.762 "name": null, 00:10:43.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.762 "is_configured": false, 00:10:43.762 "data_offset": 0, 
00:10:43.762 "data_size": 63488 00:10:43.762 }, 00:10:43.762 { 00:10:43.762 "name": "BaseBdev2", 00:10:43.762 "uuid": "abf90ce4-a7cf-565c-836b-f09f60a93054", 00:10:43.762 "is_configured": true, 00:10:43.762 "data_offset": 2048, 00:10:43.762 "data_size": 63488 00:10:43.762 } 00:10:43.762 ] 00:10:43.762 }' 00:10:43.762 14:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:44.018 14:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:44.018 14:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:44.018 14:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:44.018 14:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 75002 00:10:44.018 14:35:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 75002 ']' 00:10:44.018 14:35:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 75002 00:10:44.018 14:35:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:10:44.018 14:35:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:44.018 14:35:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75002 00:10:44.018 killing process with pid 75002 00:10:44.018 Received shutdown signal, test time was about 14.765695 seconds 00:10:44.018 00:10:44.018 Latency(us) 00:10:44.018 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:44.018 =================================================================================================================== 00:10:44.018 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:44.018 14:35:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:44.018 14:35:35 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:44.018 14:35:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75002' 00:10:44.018 14:35:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 75002 00:10:44.018 [2024-10-01 14:35:35.501253] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:44.018 14:35:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 75002 00:10:44.018 [2024-10-01 14:35:35.501351] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:44.018 [2024-10-01 14:35:35.501395] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:44.018 [2024-10-01 14:35:35.501402] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:10:44.018 [2024-10-01 14:35:35.615460] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:44.948 14:35:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:10:44.948 ************************************ 00:10:44.948 END TEST raid_rebuild_test_sb_io 00:10:44.948 ************************************ 00:10:44.948 00:10:44.948 real 0m17.077s 00:10:44.948 user 0m21.742s 00:10:44.948 sys 0m1.426s 00:10:44.948 14:35:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:44.948 14:35:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:10:44.948 14:35:36 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:10:44.948 14:35:36 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:10:44.948 14:35:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:10:44.948 14:35:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:10:44.948 14:35:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:44.948 ************************************ 00:10:44.948 START TEST raid_rebuild_test 00:10:44.948 ************************************ 00:10:44.948 14:35:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true 00:10:44.948 14:35:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:10:44.948 14:35:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:10:44.948 14:35:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:10:44.948 14:35:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:10:44.948 14:35:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:10:44.948 14:35:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:10:44.948 14:35:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:44.948 14:35:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:10:44.948 14:35:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:44.948 14:35:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:44.949 14:35:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:10:44.949 14:35:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:44.949 14:35:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:44.949 14:35:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:10:44.949 14:35:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:44.949 14:35:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:44.949 14:35:36 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:10:44.949 14:35:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:44.949 14:35:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:44.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:44.949 14:35:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:44.949 14:35:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:10:44.949 14:35:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:10:44.949 14:35:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:10:44.949 14:35:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:10:44.949 14:35:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:10:44.949 14:35:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:10:44.949 14:35:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:10:44.949 14:35:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:10:44.949 14:35:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:10:44.949 14:35:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75665 00:10:44.949 14:35:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75665 00:10:44.949 14:35:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 75665 ']' 00:10:44.949 14:35:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.949 14:35:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:44.949 14:35:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # 
echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.949 14:35:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:44.949 14:35:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.949 14:35:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:10:44.949 I/O size of 3145728 is greater than zero copy threshold (65536). 00:10:44.949 Zero copy mechanism will not be used. 00:10:44.949 [2024-10-01 14:35:36.440847] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:10:44.949 [2024-10-01 14:35:36.440976] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75665 ] 00:10:44.949 [2024-10-01 14:35:36.593068] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.206 [2024-10-01 14:35:36.755880] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.206 [2024-10-01 14:35:36.873412] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:45.206 [2024-10-01 14:35:36.873454] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:45.771 14:35:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:45.771 14:35:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:10:45.771 14:35:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:45.771 14:35:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:45.771 14:35:37 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.771 14:35:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.771 BaseBdev1_malloc 00:10:45.771 14:35:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.771 14:35:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:10:45.771 14:35:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.771 14:35:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.771 [2024-10-01 14:35:37.281890] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:10:45.771 [2024-10-01 14:35:37.281950] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.771 [2024-10-01 14:35:37.281968] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:45.771 [2024-10-01 14:35:37.281979] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.771 [2024-10-01 14:35:37.283872] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.771 [2024-10-01 14:35:37.283905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:45.771 BaseBdev1 00:10:45.771 14:35:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.771 14:35:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:45.771 14:35:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:45.771 14:35:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.771 14:35:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.771 BaseBdev2_malloc 00:10:45.771 14:35:37 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.771 14:35:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:10:45.771 14:35:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.771 14:35:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.771 [2024-10-01 14:35:37.332219] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:10:45.771 [2024-10-01 14:35:37.332286] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.771 [2024-10-01 14:35:37.332304] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:45.771 [2024-10-01 14:35:37.332316] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.771 [2024-10-01 14:35:37.334234] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.771 [2024-10-01 14:35:37.334271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:45.771 BaseBdev2 00:10:45.771 14:35:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.771 14:35:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:45.771 14:35:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:45.771 14:35:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.771 14:35:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.771 BaseBdev3_malloc 00:10:45.771 14:35:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.771 14:35:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:10:45.771 14:35:37 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.771 14:35:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.771 [2024-10-01 14:35:37.365239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:10:45.771 [2024-10-01 14:35:37.365289] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.771 [2024-10-01 14:35:37.365306] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:45.771 [2024-10-01 14:35:37.365316] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.772 [2024-10-01 14:35:37.367076] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.772 [2024-10-01 14:35:37.367109] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:45.772 BaseBdev3 00:10:45.772 14:35:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.772 14:35:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:45.772 14:35:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:45.772 14:35:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.772 14:35:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.772 BaseBdev4_malloc 00:10:45.772 14:35:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.772 14:35:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:10:45.772 14:35:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.772 14:35:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.772 [2024-10-01 14:35:37.397731] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:10:45.772 [2024-10-01 14:35:37.397777] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.772 [2024-10-01 14:35:37.397792] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:45.772 [2024-10-01 14:35:37.397800] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.772 [2024-10-01 14:35:37.399560] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.772 [2024-10-01 14:35:37.399594] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:45.772 BaseBdev4 00:10:45.772 14:35:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.772 14:35:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:10:45.772 14:35:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.772 14:35:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.772 spare_malloc 00:10:45.772 14:35:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.772 14:35:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:10:45.772 14:35:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.772 14:35:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.772 spare_delay 00:10:45.772 14:35:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.772 14:35:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:10:45.772 14:35:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:45.772 14:35:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.772 [2024-10-01 14:35:37.437170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:45.772 [2024-10-01 14:35:37.437212] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.772 [2024-10-01 14:35:37.437225] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:45.772 [2024-10-01 14:35:37.437234] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.772 [2024-10-01 14:35:37.438987] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.772 [2024-10-01 14:35:37.439018] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:10:45.772 spare 00:10:45.772 14:35:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.772 14:35:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:10:45.772 14:35:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.772 14:35:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.772 [2024-10-01 14:35:37.445235] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:45.772 [2024-10-01 14:35:37.446801] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:45.772 [2024-10-01 14:35:37.446859] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:45.772 [2024-10-01 14:35:37.446902] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:45.772 [2024-10-01 14:35:37.446968] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:45.772 [2024-10-01 
14:35:37.446977] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:45.772 [2024-10-01 14:35:37.447192] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:45.772 [2024-10-01 14:35:37.447311] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:45.772 [2024-10-01 14:35:37.447319] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:45.772 [2024-10-01 14:35:37.447430] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:45.772 14:35:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.772 14:35:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:45.772 14:35:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:45.772 14:35:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:45.772 14:35:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:45.772 14:35:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:45.772 14:35:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.772 14:35:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.772 14:35:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.772 14:35:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.772 14:35:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.772 14:35:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.772 14:35:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:45.772 14:35:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.772 14:35:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:46.030 14:35:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.030 14:35:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.030 "name": "raid_bdev1", 00:10:46.030 "uuid": "a9e4897e-d093-40ff-9748-93ba54bae251", 00:10:46.030 "strip_size_kb": 0, 00:10:46.030 "state": "online", 00:10:46.030 "raid_level": "raid1", 00:10:46.030 "superblock": false, 00:10:46.030 "num_base_bdevs": 4, 00:10:46.030 "num_base_bdevs_discovered": 4, 00:10:46.030 "num_base_bdevs_operational": 4, 00:10:46.030 "base_bdevs_list": [ 00:10:46.030 { 00:10:46.030 "name": "BaseBdev1", 00:10:46.030 "uuid": "522dece6-145f-5ae9-b7bd-c43db61458bc", 00:10:46.030 "is_configured": true, 00:10:46.030 "data_offset": 0, 00:10:46.030 "data_size": 65536 00:10:46.030 }, 00:10:46.030 { 00:10:46.030 "name": "BaseBdev2", 00:10:46.030 "uuid": "9fd4a399-e608-5cf1-bceb-815c0686358b", 00:10:46.030 "is_configured": true, 00:10:46.030 "data_offset": 0, 00:10:46.030 "data_size": 65536 00:10:46.030 }, 00:10:46.030 { 00:10:46.030 "name": "BaseBdev3", 00:10:46.030 "uuid": "06363e74-b2b5-51de-ab39-b144cfda97be", 00:10:46.030 "is_configured": true, 00:10:46.030 "data_offset": 0, 00:10:46.030 "data_size": 65536 00:10:46.030 }, 00:10:46.030 { 00:10:46.030 "name": "BaseBdev4", 00:10:46.030 "uuid": "2cfdd343-22ba-5276-85e1-793947a92c29", 00:10:46.030 "is_configured": true, 00:10:46.030 "data_offset": 0, 00:10:46.030 "data_size": 65536 00:10:46.030 } 00:10:46.030 ] 00:10:46.030 }' 00:10:46.030 14:35:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.030 14:35:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.287 14:35:37 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:46.287 14:35:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.287 14:35:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.287 14:35:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:10:46.287 [2024-10-01 14:35:37.757563] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:46.287 14:35:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.287 14:35:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:10:46.287 14:35:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.287 14:35:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.287 14:35:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:10:46.287 14:35:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.287 14:35:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.287 14:35:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:10:46.287 14:35:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:10:46.287 14:35:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:10:46.287 14:35:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:10:46.287 14:35:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:10:46.287 14:35:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:46.287 14:35:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:10:46.287 14:35:37 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:46.287 14:35:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:10:46.287 14:35:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:46.287 14:35:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:10:46.287 14:35:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:46.287 14:35:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:46.287 14:35:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:10:46.545 [2024-10-01 14:35:38.001346] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:10:46.545 /dev/nbd0 00:10:46.545 14:35:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:46.545 14:35:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:46.545 14:35:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:10:46.545 14:35:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:10:46.545 14:35:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:46.545 14:35:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:46.545 14:35:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:10:46.545 14:35:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:10:46.545 14:35:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:46.545 14:35:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:46.545 14:35:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:46.545 1+0 records in 00:10:46.545 1+0 records out 00:10:46.545 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00016984 s, 24.1 MB/s 00:10:46.545 14:35:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:46.545 14:35:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:10:46.545 14:35:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:46.545 14:35:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:46.545 14:35:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:10:46.545 14:35:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:46.545 14:35:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:46.545 14:35:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:10:46.545 14:35:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:10:46.545 14:35:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:10:53.096 65536+0 records in 00:10:53.096 65536+0 records out 00:10:53.096 33554432 bytes (34 MB, 32 MiB) copied, 5.5193 s, 6.1 MB/s 00:10:53.096 14:35:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:10:53.096 14:35:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:53.096 14:35:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:53.096 14:35:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:53.096 14:35:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 
00:10:53.096 14:35:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:53.096 14:35:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:53.096 [2024-10-01 14:35:43.783640] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:53.096 14:35:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:53.096 14:35:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:53.096 14:35:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:53.096 14:35:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:53.096 14:35:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:53.096 14:35:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:53.096 14:35:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:10:53.096 14:35:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:10:53.096 14:35:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:10:53.096 14:35:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.096 14:35:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.096 [2024-10-01 14:35:43.807697] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:53.096 14:35:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.096 14:35:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:53.096 14:35:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:53.096 14:35:43 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:53.097 14:35:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:53.097 14:35:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:53.097 14:35:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:53.097 14:35:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.097 14:35:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.097 14:35:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.097 14:35:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.097 14:35:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.097 14:35:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.097 14:35:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.097 14:35:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.097 14:35:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.097 14:35:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.097 "name": "raid_bdev1", 00:10:53.097 "uuid": "a9e4897e-d093-40ff-9748-93ba54bae251", 00:10:53.097 "strip_size_kb": 0, 00:10:53.097 "state": "online", 00:10:53.097 "raid_level": "raid1", 00:10:53.097 "superblock": false, 00:10:53.097 "num_base_bdevs": 4, 00:10:53.097 "num_base_bdevs_discovered": 3, 00:10:53.097 "num_base_bdevs_operational": 3, 00:10:53.097 "base_bdevs_list": [ 00:10:53.097 { 00:10:53.097 "name": null, 00:10:53.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.097 "is_configured": false, 00:10:53.097 "data_offset": 0, 00:10:53.097 "data_size": 
65536 00:10:53.097 }, 00:10:53.097 { 00:10:53.097 "name": "BaseBdev2", 00:10:53.097 "uuid": "9fd4a399-e608-5cf1-bceb-815c0686358b", 00:10:53.097 "is_configured": true, 00:10:53.097 "data_offset": 0, 00:10:53.097 "data_size": 65536 00:10:53.097 }, 00:10:53.097 { 00:10:53.097 "name": "BaseBdev3", 00:10:53.097 "uuid": "06363e74-b2b5-51de-ab39-b144cfda97be", 00:10:53.097 "is_configured": true, 00:10:53.097 "data_offset": 0, 00:10:53.097 "data_size": 65536 00:10:53.097 }, 00:10:53.097 { 00:10:53.097 "name": "BaseBdev4", 00:10:53.097 "uuid": "2cfdd343-22ba-5276-85e1-793947a92c29", 00:10:53.097 "is_configured": true, 00:10:53.097 "data_offset": 0, 00:10:53.097 "data_size": 65536 00:10:53.097 } 00:10:53.097 ] 00:10:53.097 }' 00:10:53.097 14:35:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.097 14:35:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.097 14:35:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:53.097 14:35:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.097 14:35:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.097 [2024-10-01 14:35:44.127764] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:53.097 [2024-10-01 14:35:44.135527] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:10:53.097 14:35:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.097 14:35:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:10:53.097 [2024-10-01 14:35:44.137111] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:53.663 14:35:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:53.663 14:35:45 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:53.663 14:35:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:53.663 14:35:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:53.663 14:35:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:53.663 14:35:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.663 14:35:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.663 14:35:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.663 14:35:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.663 14:35:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.663 14:35:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:53.663 "name": "raid_bdev1", 00:10:53.663 "uuid": "a9e4897e-d093-40ff-9748-93ba54bae251", 00:10:53.663 "strip_size_kb": 0, 00:10:53.663 "state": "online", 00:10:53.663 "raid_level": "raid1", 00:10:53.663 "superblock": false, 00:10:53.663 "num_base_bdevs": 4, 00:10:53.663 "num_base_bdevs_discovered": 4, 00:10:53.663 "num_base_bdevs_operational": 4, 00:10:53.663 "process": { 00:10:53.663 "type": "rebuild", 00:10:53.663 "target": "spare", 00:10:53.663 "progress": { 00:10:53.663 "blocks": 20480, 00:10:53.663 "percent": 31 00:10:53.663 } 00:10:53.663 }, 00:10:53.663 "base_bdevs_list": [ 00:10:53.663 { 00:10:53.663 "name": "spare", 00:10:53.663 "uuid": "33012c57-9bb9-5509-9361-f409c1f366b8", 00:10:53.663 "is_configured": true, 00:10:53.663 "data_offset": 0, 00:10:53.663 "data_size": 65536 00:10:53.663 }, 00:10:53.663 { 00:10:53.663 "name": "BaseBdev2", 00:10:53.663 "uuid": "9fd4a399-e608-5cf1-bceb-815c0686358b", 00:10:53.663 "is_configured": true, 00:10:53.663 "data_offset": 0, 
00:10:53.663 "data_size": 65536 00:10:53.663 }, 00:10:53.663 { 00:10:53.663 "name": "BaseBdev3", 00:10:53.663 "uuid": "06363e74-b2b5-51de-ab39-b144cfda97be", 00:10:53.663 "is_configured": true, 00:10:53.663 "data_offset": 0, 00:10:53.663 "data_size": 65536 00:10:53.663 }, 00:10:53.663 { 00:10:53.663 "name": "BaseBdev4", 00:10:53.663 "uuid": "2cfdd343-22ba-5276-85e1-793947a92c29", 00:10:53.663 "is_configured": true, 00:10:53.663 "data_offset": 0, 00:10:53.663 "data_size": 65536 00:10:53.663 } 00:10:53.663 ] 00:10:53.663 }' 00:10:53.663 14:35:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:53.663 14:35:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:53.663 14:35:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:53.663 14:35:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:53.663 14:35:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:10:53.663 14:35:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.663 14:35:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.663 [2024-10-01 14:35:45.243459] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:53.663 [2024-10-01 14:35:45.342717] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:10:53.663 [2024-10-01 14:35:45.342785] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:53.663 [2024-10-01 14:35:45.342799] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:53.663 [2024-10-01 14:35:45.342808] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:10:53.921 14:35:45 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.921 14:35:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:53.921 14:35:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:53.921 14:35:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:53.921 14:35:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:53.921 14:35:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:53.921 14:35:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:53.921 14:35:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.921 14:35:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.921 14:35:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.921 14:35:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.921 14:35:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.921 14:35:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.922 14:35:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.922 14:35:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.922 14:35:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.922 14:35:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.922 "name": "raid_bdev1", 00:10:53.922 "uuid": "a9e4897e-d093-40ff-9748-93ba54bae251", 00:10:53.922 "strip_size_kb": 0, 00:10:53.922 "state": "online", 00:10:53.922 "raid_level": "raid1", 00:10:53.922 "superblock": false, 00:10:53.922 
"num_base_bdevs": 4, 00:10:53.922 "num_base_bdevs_discovered": 3, 00:10:53.922 "num_base_bdevs_operational": 3, 00:10:53.922 "base_bdevs_list": [ 00:10:53.922 { 00:10:53.922 "name": null, 00:10:53.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.922 "is_configured": false, 00:10:53.922 "data_offset": 0, 00:10:53.922 "data_size": 65536 00:10:53.922 }, 00:10:53.922 { 00:10:53.922 "name": "BaseBdev2", 00:10:53.922 "uuid": "9fd4a399-e608-5cf1-bceb-815c0686358b", 00:10:53.922 "is_configured": true, 00:10:53.922 "data_offset": 0, 00:10:53.922 "data_size": 65536 00:10:53.922 }, 00:10:53.922 { 00:10:53.922 "name": "BaseBdev3", 00:10:53.922 "uuid": "06363e74-b2b5-51de-ab39-b144cfda97be", 00:10:53.922 "is_configured": true, 00:10:53.922 "data_offset": 0, 00:10:53.922 "data_size": 65536 00:10:53.922 }, 00:10:53.922 { 00:10:53.922 "name": "BaseBdev4", 00:10:53.922 "uuid": "2cfdd343-22ba-5276-85e1-793947a92c29", 00:10:53.922 "is_configured": true, 00:10:53.922 "data_offset": 0, 00:10:53.922 "data_size": 65536 00:10:53.922 } 00:10:53.922 ] 00:10:53.922 }' 00:10:53.922 14:35:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.922 14:35:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.179 14:35:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:54.179 14:35:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:54.179 14:35:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:54.179 14:35:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:54.179 14:35:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:54.179 14:35:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:54.179 14:35:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.179 14:35:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.179 14:35:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.179 14:35:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.179 14:35:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:54.179 "name": "raid_bdev1", 00:10:54.179 "uuid": "a9e4897e-d093-40ff-9748-93ba54bae251", 00:10:54.179 "strip_size_kb": 0, 00:10:54.179 "state": "online", 00:10:54.179 "raid_level": "raid1", 00:10:54.179 "superblock": false, 00:10:54.179 "num_base_bdevs": 4, 00:10:54.179 "num_base_bdevs_discovered": 3, 00:10:54.179 "num_base_bdevs_operational": 3, 00:10:54.179 "base_bdevs_list": [ 00:10:54.179 { 00:10:54.179 "name": null, 00:10:54.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.179 "is_configured": false, 00:10:54.179 "data_offset": 0, 00:10:54.179 "data_size": 65536 00:10:54.179 }, 00:10:54.179 { 00:10:54.179 "name": "BaseBdev2", 00:10:54.179 "uuid": "9fd4a399-e608-5cf1-bceb-815c0686358b", 00:10:54.179 "is_configured": true, 00:10:54.179 "data_offset": 0, 00:10:54.179 "data_size": 65536 00:10:54.179 }, 00:10:54.179 { 00:10:54.179 "name": "BaseBdev3", 00:10:54.179 "uuid": "06363e74-b2b5-51de-ab39-b144cfda97be", 00:10:54.179 "is_configured": true, 00:10:54.179 "data_offset": 0, 00:10:54.179 "data_size": 65536 00:10:54.179 }, 00:10:54.179 { 00:10:54.179 "name": "BaseBdev4", 00:10:54.179 "uuid": "2cfdd343-22ba-5276-85e1-793947a92c29", 00:10:54.179 "is_configured": true, 00:10:54.179 "data_offset": 0, 00:10:54.179 "data_size": 65536 00:10:54.179 } 00:10:54.179 ] 00:10:54.179 }' 00:10:54.179 14:35:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:54.179 14:35:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:54.179 14:35:45 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:54.179 14:35:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:54.179 14:35:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:54.179 14:35:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.179 14:35:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.179 [2024-10-01 14:35:45.791025] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:54.179 [2024-10-01 14:35:45.798663] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:10:54.179 14:35:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.179 14:35:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:10:54.179 [2024-10-01 14:35:45.800276] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:55.550 14:35:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:55.550 14:35:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:55.550 14:35:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:55.550 14:35:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:55.550 14:35:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:55.550 14:35:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.550 14:35:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.550 14:35:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.550 14:35:46 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:55.550 14:35:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.550 14:35:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:55.550 "name": "raid_bdev1", 00:10:55.550 "uuid": "a9e4897e-d093-40ff-9748-93ba54bae251", 00:10:55.550 "strip_size_kb": 0, 00:10:55.550 "state": "online", 00:10:55.550 "raid_level": "raid1", 00:10:55.550 "superblock": false, 00:10:55.550 "num_base_bdevs": 4, 00:10:55.550 "num_base_bdevs_discovered": 4, 00:10:55.550 "num_base_bdevs_operational": 4, 00:10:55.550 "process": { 00:10:55.550 "type": "rebuild", 00:10:55.550 "target": "spare", 00:10:55.550 "progress": { 00:10:55.550 "blocks": 20480, 00:10:55.550 "percent": 31 00:10:55.550 } 00:10:55.550 }, 00:10:55.550 "base_bdevs_list": [ 00:10:55.550 { 00:10:55.550 "name": "spare", 00:10:55.550 "uuid": "33012c57-9bb9-5509-9361-f409c1f366b8", 00:10:55.550 "is_configured": true, 00:10:55.550 "data_offset": 0, 00:10:55.550 "data_size": 65536 00:10:55.550 }, 00:10:55.550 { 00:10:55.550 "name": "BaseBdev2", 00:10:55.550 "uuid": "9fd4a399-e608-5cf1-bceb-815c0686358b", 00:10:55.550 "is_configured": true, 00:10:55.550 "data_offset": 0, 00:10:55.550 "data_size": 65536 00:10:55.550 }, 00:10:55.550 { 00:10:55.550 "name": "BaseBdev3", 00:10:55.550 "uuid": "06363e74-b2b5-51de-ab39-b144cfda97be", 00:10:55.550 "is_configured": true, 00:10:55.550 "data_offset": 0, 00:10:55.550 "data_size": 65536 00:10:55.550 }, 00:10:55.550 { 00:10:55.550 "name": "BaseBdev4", 00:10:55.550 "uuid": "2cfdd343-22ba-5276-85e1-793947a92c29", 00:10:55.550 "is_configured": true, 00:10:55.550 "data_offset": 0, 00:10:55.550 "data_size": 65536 00:10:55.550 } 00:10:55.550 ] 00:10:55.550 }' 00:10:55.550 14:35:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:55.550 14:35:46 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:55.550 14:35:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:55.550 14:35:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:55.550 14:35:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:10:55.550 14:35:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:10:55.550 14:35:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:10:55.550 14:35:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:10:55.550 14:35:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:55.550 14:35:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.550 14:35:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.550 [2024-10-01 14:35:46.910559] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:55.550 [2024-10-01 14:35:47.005915] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:10:55.550 14:35:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.550 14:35:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:10:55.550 14:35:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:10:55.550 14:35:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:55.550 14:35:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:55.550 14:35:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:55.550 14:35:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:10:55.550 14:35:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:55.550 14:35:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.550 14:35:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:55.550 14:35:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.550 14:35:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.550 14:35:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.550 14:35:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:55.550 "name": "raid_bdev1", 00:10:55.550 "uuid": "a9e4897e-d093-40ff-9748-93ba54bae251", 00:10:55.550 "strip_size_kb": 0, 00:10:55.550 "state": "online", 00:10:55.550 "raid_level": "raid1", 00:10:55.550 "superblock": false, 00:10:55.550 "num_base_bdevs": 4, 00:10:55.550 "num_base_bdevs_discovered": 3, 00:10:55.550 "num_base_bdevs_operational": 3, 00:10:55.550 "process": { 00:10:55.550 "type": "rebuild", 00:10:55.550 "target": "spare", 00:10:55.550 "progress": { 00:10:55.550 "blocks": 24576, 00:10:55.550 "percent": 37 00:10:55.550 } 00:10:55.550 }, 00:10:55.550 "base_bdevs_list": [ 00:10:55.550 { 00:10:55.550 "name": "spare", 00:10:55.550 "uuid": "33012c57-9bb9-5509-9361-f409c1f366b8", 00:10:55.550 "is_configured": true, 00:10:55.550 "data_offset": 0, 00:10:55.550 "data_size": 65536 00:10:55.551 }, 00:10:55.551 { 00:10:55.551 "name": null, 00:10:55.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.551 "is_configured": false, 00:10:55.551 "data_offset": 0, 00:10:55.551 "data_size": 65536 00:10:55.551 }, 00:10:55.551 { 00:10:55.551 "name": "BaseBdev3", 00:10:55.551 "uuid": "06363e74-b2b5-51de-ab39-b144cfda97be", 00:10:55.551 "is_configured": true, 00:10:55.551 "data_offset": 0, 00:10:55.551 "data_size": 65536 
00:10:55.551 }, 00:10:55.551 { 00:10:55.551 "name": "BaseBdev4", 00:10:55.551 "uuid": "2cfdd343-22ba-5276-85e1-793947a92c29", 00:10:55.551 "is_configured": true, 00:10:55.551 "data_offset": 0, 00:10:55.551 "data_size": 65536 00:10:55.551 } 00:10:55.551 ] 00:10:55.551 }' 00:10:55.551 14:35:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:55.551 14:35:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:55.551 14:35:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:55.551 14:35:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:55.551 14:35:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=367 00:10:55.551 14:35:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:55.551 14:35:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:55.551 14:35:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:55.551 14:35:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:55.551 14:35:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:55.551 14:35:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:55.551 14:35:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:55.551 14:35:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.551 14:35:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.551 14:35:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.551 14:35:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:55.551 14:35:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:55.551 "name": "raid_bdev1", 00:10:55.551 "uuid": "a9e4897e-d093-40ff-9748-93ba54bae251", 00:10:55.551 "strip_size_kb": 0, 00:10:55.551 "state": "online", 00:10:55.551 "raid_level": "raid1", 00:10:55.551 "superblock": false, 00:10:55.551 "num_base_bdevs": 4, 00:10:55.551 "num_base_bdevs_discovered": 3, 00:10:55.551 "num_base_bdevs_operational": 3, 00:10:55.551 "process": { 00:10:55.551 "type": "rebuild", 00:10:55.551 "target": "spare", 00:10:55.551 "progress": { 00:10:55.551 "blocks": 24576, 00:10:55.551 "percent": 37 00:10:55.551 } 00:10:55.551 }, 00:10:55.551 "base_bdevs_list": [ 00:10:55.551 { 00:10:55.551 "name": "spare", 00:10:55.551 "uuid": "33012c57-9bb9-5509-9361-f409c1f366b8", 00:10:55.551 "is_configured": true, 00:10:55.551 "data_offset": 0, 00:10:55.551 "data_size": 65536 00:10:55.551 }, 00:10:55.551 { 00:10:55.551 "name": null, 00:10:55.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.551 "is_configured": false, 00:10:55.551 "data_offset": 0, 00:10:55.551 "data_size": 65536 00:10:55.551 }, 00:10:55.551 { 00:10:55.551 "name": "BaseBdev3", 00:10:55.551 "uuid": "06363e74-b2b5-51de-ab39-b144cfda97be", 00:10:55.551 "is_configured": true, 00:10:55.551 "data_offset": 0, 00:10:55.551 "data_size": 65536 00:10:55.551 }, 00:10:55.551 { 00:10:55.551 "name": "BaseBdev4", 00:10:55.551 "uuid": "2cfdd343-22ba-5276-85e1-793947a92c29", 00:10:55.551 "is_configured": true, 00:10:55.551 "data_offset": 0, 00:10:55.551 "data_size": 65536 00:10:55.551 } 00:10:55.551 ] 00:10:55.551 }' 00:10:55.551 14:35:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:55.551 14:35:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:55.551 14:35:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:55.551 14:35:47 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:55.551 14:35:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:56.920 14:35:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:56.920 14:35:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:56.920 14:35:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:56.920 14:35:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:56.920 14:35:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:56.920 14:35:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:56.920 14:35:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.920 14:35:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.920 14:35:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.920 14:35:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:56.920 14:35:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.920 14:35:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:56.920 "name": "raid_bdev1", 00:10:56.920 "uuid": "a9e4897e-d093-40ff-9748-93ba54bae251", 00:10:56.920 "strip_size_kb": 0, 00:10:56.920 "state": "online", 00:10:56.920 "raid_level": "raid1", 00:10:56.920 "superblock": false, 00:10:56.920 "num_base_bdevs": 4, 00:10:56.920 "num_base_bdevs_discovered": 3, 00:10:56.920 "num_base_bdevs_operational": 3, 00:10:56.920 "process": { 00:10:56.920 "type": "rebuild", 00:10:56.920 "target": "spare", 00:10:56.920 "progress": { 00:10:56.920 "blocks": 47104, 00:10:56.920 "percent": 71 00:10:56.920 
} 00:10:56.920 }, 00:10:56.920 "base_bdevs_list": [ 00:10:56.920 { 00:10:56.920 "name": "spare", 00:10:56.920 "uuid": "33012c57-9bb9-5509-9361-f409c1f366b8", 00:10:56.920 "is_configured": true, 00:10:56.920 "data_offset": 0, 00:10:56.920 "data_size": 65536 00:10:56.920 }, 00:10:56.920 { 00:10:56.920 "name": null, 00:10:56.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.920 "is_configured": false, 00:10:56.920 "data_offset": 0, 00:10:56.920 "data_size": 65536 00:10:56.920 }, 00:10:56.920 { 00:10:56.920 "name": "BaseBdev3", 00:10:56.920 "uuid": "06363e74-b2b5-51de-ab39-b144cfda97be", 00:10:56.920 "is_configured": true, 00:10:56.920 "data_offset": 0, 00:10:56.920 "data_size": 65536 00:10:56.920 }, 00:10:56.920 { 00:10:56.920 "name": "BaseBdev4", 00:10:56.920 "uuid": "2cfdd343-22ba-5276-85e1-793947a92c29", 00:10:56.920 "is_configured": true, 00:10:56.920 "data_offset": 0, 00:10:56.920 "data_size": 65536 00:10:56.920 } 00:10:56.920 ] 00:10:56.920 }' 00:10:56.920 14:35:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:56.920 14:35:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:56.920 14:35:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:56.920 14:35:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:56.920 14:35:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:57.487 [2024-10-01 14:35:49.015072] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:10:57.487 [2024-10-01 14:35:49.015134] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:10:57.487 [2024-10-01 14:35:49.015174] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:57.745 14:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 
00:10:57.745 14:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:57.745 14:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:57.745 14:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:57.745 14:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:57.745 14:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:57.745 14:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.745 14:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:57.745 14:35:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.745 14:35:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.745 14:35:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.745 14:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:57.745 "name": "raid_bdev1", 00:10:57.745 "uuid": "a9e4897e-d093-40ff-9748-93ba54bae251", 00:10:57.745 "strip_size_kb": 0, 00:10:57.745 "state": "online", 00:10:57.745 "raid_level": "raid1", 00:10:57.745 "superblock": false, 00:10:57.745 "num_base_bdevs": 4, 00:10:57.745 "num_base_bdevs_discovered": 3, 00:10:57.745 "num_base_bdevs_operational": 3, 00:10:57.745 "base_bdevs_list": [ 00:10:57.745 { 00:10:57.745 "name": "spare", 00:10:57.745 "uuid": "33012c57-9bb9-5509-9361-f409c1f366b8", 00:10:57.745 "is_configured": true, 00:10:57.745 "data_offset": 0, 00:10:57.745 "data_size": 65536 00:10:57.745 }, 00:10:57.745 { 00:10:57.745 "name": null, 00:10:57.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.745 "is_configured": false, 00:10:57.745 "data_offset": 0, 00:10:57.745 "data_size": 65536 
00:10:57.745 }, 00:10:57.745 { 00:10:57.745 "name": "BaseBdev3", 00:10:57.745 "uuid": "06363e74-b2b5-51de-ab39-b144cfda97be", 00:10:57.745 "is_configured": true, 00:10:57.745 "data_offset": 0, 00:10:57.745 "data_size": 65536 00:10:57.745 }, 00:10:57.745 { 00:10:57.745 "name": "BaseBdev4", 00:10:57.745 "uuid": "2cfdd343-22ba-5276-85e1-793947a92c29", 00:10:57.745 "is_configured": true, 00:10:57.745 "data_offset": 0, 00:10:57.745 "data_size": 65536 00:10:57.745 } 00:10:57.745 ] 00:10:57.745 }' 00:10:57.745 14:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:57.745 14:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:10:57.745 14:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:57.745 14:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:10:57.745 14:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:10:57.745 14:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:57.745 14:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:57.745 14:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:57.745 14:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:57.745 14:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:57.745 14:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:57.745 14:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.745 14:35:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.745 14:35:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:10:57.745 14:35:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.745 14:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:57.745 "name": "raid_bdev1", 00:10:57.745 "uuid": "a9e4897e-d093-40ff-9748-93ba54bae251", 00:10:57.745 "strip_size_kb": 0, 00:10:57.745 "state": "online", 00:10:57.745 "raid_level": "raid1", 00:10:57.745 "superblock": false, 00:10:57.745 "num_base_bdevs": 4, 00:10:57.745 "num_base_bdevs_discovered": 3, 00:10:57.745 "num_base_bdevs_operational": 3, 00:10:57.745 "base_bdevs_list": [ 00:10:57.745 { 00:10:57.745 "name": "spare", 00:10:57.745 "uuid": "33012c57-9bb9-5509-9361-f409c1f366b8", 00:10:57.745 "is_configured": true, 00:10:57.745 "data_offset": 0, 00:10:57.745 "data_size": 65536 00:10:57.745 }, 00:10:57.745 { 00:10:57.745 "name": null, 00:10:57.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.745 "is_configured": false, 00:10:57.745 "data_offset": 0, 00:10:57.745 "data_size": 65536 00:10:57.745 }, 00:10:57.745 { 00:10:57.745 "name": "BaseBdev3", 00:10:57.745 "uuid": "06363e74-b2b5-51de-ab39-b144cfda97be", 00:10:57.746 "is_configured": true, 00:10:57.746 "data_offset": 0, 00:10:57.746 "data_size": 65536 00:10:57.746 }, 00:10:57.746 { 00:10:57.746 "name": "BaseBdev4", 00:10:57.746 "uuid": "2cfdd343-22ba-5276-85e1-793947a92c29", 00:10:57.746 "is_configured": true, 00:10:57.746 "data_offset": 0, 00:10:57.746 "data_size": 65536 00:10:57.746 } 00:10:57.746 ] 00:10:57.746 }' 00:10:57.746 14:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:58.003 14:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:58.003 14:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:58.003 14:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:58.003 14:35:49 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:58.003 14:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:58.003 14:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:58.003 14:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:58.003 14:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:58.003 14:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:58.003 14:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.003 14:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.003 14:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.003 14:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.003 14:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.003 14:35:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.003 14:35:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.003 14:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.003 14:35:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.003 14:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.003 "name": "raid_bdev1", 00:10:58.003 "uuid": "a9e4897e-d093-40ff-9748-93ba54bae251", 00:10:58.003 "strip_size_kb": 0, 00:10:58.003 "state": "online", 00:10:58.003 "raid_level": "raid1", 00:10:58.003 "superblock": false, 00:10:58.003 "num_base_bdevs": 4, 00:10:58.003 "num_base_bdevs_discovered": 3, 00:10:58.003 
"num_base_bdevs_operational": 3, 00:10:58.003 "base_bdevs_list": [ 00:10:58.003 { 00:10:58.003 "name": "spare", 00:10:58.003 "uuid": "33012c57-9bb9-5509-9361-f409c1f366b8", 00:10:58.003 "is_configured": true, 00:10:58.003 "data_offset": 0, 00:10:58.003 "data_size": 65536 00:10:58.003 }, 00:10:58.003 { 00:10:58.003 "name": null, 00:10:58.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.003 "is_configured": false, 00:10:58.003 "data_offset": 0, 00:10:58.003 "data_size": 65536 00:10:58.003 }, 00:10:58.003 { 00:10:58.003 "name": "BaseBdev3", 00:10:58.003 "uuid": "06363e74-b2b5-51de-ab39-b144cfda97be", 00:10:58.003 "is_configured": true, 00:10:58.003 "data_offset": 0, 00:10:58.003 "data_size": 65536 00:10:58.003 }, 00:10:58.003 { 00:10:58.003 "name": "BaseBdev4", 00:10:58.003 "uuid": "2cfdd343-22ba-5276-85e1-793947a92c29", 00:10:58.003 "is_configured": true, 00:10:58.003 "data_offset": 0, 00:10:58.003 "data_size": 65536 00:10:58.003 } 00:10:58.003 ] 00:10:58.003 }' 00:10:58.003 14:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.003 14:35:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.261 14:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:58.261 14:35:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.261 14:35:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.261 [2024-10-01 14:35:49.815336] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:58.261 [2024-10-01 14:35:49.815365] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:58.261 [2024-10-01 14:35:49.815429] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:58.261 [2024-10-01 14:35:49.815498] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all 
in destruct 00:10:58.261 [2024-10-01 14:35:49.815506] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:58.261 14:35:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.261 14:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.261 14:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:10:58.261 14:35:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.261 14:35:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.261 14:35:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.261 14:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:10:58.261 14:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:10:58.261 14:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:10:58.261 14:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:10:58.261 14:35:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:58.261 14:35:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:10:58.261 14:35:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:58.261 14:35:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:58.261 14:35:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:58.261 14:35:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:10:58.261 14:35:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:58.261 14:35:49 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:58.261 14:35:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:10:58.521 /dev/nbd0 00:10:58.521 14:35:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:58.521 14:35:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:58.521 14:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:10:58.521 14:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:10:58.521 14:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:58.521 14:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:58.521 14:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:10:58.521 14:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:10:58.521 14:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:58.521 14:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:58.521 14:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:58.521 1+0 records in 00:10:58.521 1+0 records out 00:10:58.521 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257221 s, 15.9 MB/s 00:10:58.521 14:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:58.521 14:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:10:58.521 14:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:58.521 14:35:50 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:58.521 14:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:10:58.521 14:35:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:58.521 14:35:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:58.521 14:35:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:10:58.781 /dev/nbd1 00:10:58.781 14:35:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:58.781 14:35:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:58.781 14:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:10:58.781 14:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:10:58.781 14:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:58.781 14:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:58.781 14:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:10:58.781 14:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:10:58.781 14:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:58.781 14:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:58.781 14:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:58.781 1+0 records in 00:10:58.781 1+0 records out 00:10:58.781 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296946 s, 13.8 MB/s 00:10:58.781 14:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:58.781 14:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:10:58.781 14:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:58.782 14:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:58.782 14:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:10:58.782 14:35:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:58.782 14:35:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:58.782 14:35:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:10:58.782 14:35:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:10:58.782 14:35:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:58.782 14:35:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:58.782 14:35:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:58.782 14:35:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:10:58.782 14:35:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:58.782 14:35:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:59.040 14:35:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:59.040 14:35:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:59.040 14:35:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:59.040 14:35:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:10:59.040 14:35:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:59.040 14:35:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:59.040 14:35:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:10:59.040 14:35:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:10:59.040 14:35:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:59.040 14:35:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:10:59.299 14:35:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:59.299 14:35:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:59.299 14:35:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:59.299 14:35:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:59.299 14:35:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:59.299 14:35:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:59.299 14:35:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:10:59.299 14:35:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:10:59.299 14:35:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:10:59.299 14:35:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75665 00:10:59.299 14:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 75665 ']' 00:10:59.299 14:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 75665 00:10:59.299 14:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:10:59.299 14:35:50 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:59.299 14:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75665 00:10:59.299 killing process with pid 75665 00:10:59.299 Received shutdown signal, test time was about 60.000000 seconds 00:10:59.299 00:10:59.299 Latency(us) 00:10:59.299 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:59.299 =================================================================================================================== 00:10:59.299 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:59.299 14:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:59.299 14:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:59.299 14:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75665' 00:10:59.299 14:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 75665 00:10:59.299 [2024-10-01 14:35:50.893375] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:59.299 14:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 75665 00:10:59.557 [2024-10-01 14:35:51.138803] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:00.489 14:35:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:11:00.489 00:11:00.489 real 0m15.439s 00:11:00.489 user 0m16.792s 00:11:00.489 sys 0m2.641s 00:11:00.489 ************************************ 00:11:00.489 END TEST raid_rebuild_test 00:11:00.489 ************************************ 00:11:00.489 14:35:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:00.489 14:35:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.489 14:35:51 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb 
raid_rebuild_test raid1 4 true false true 00:11:00.489 14:35:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:00.489 14:35:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:00.489 14:35:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:00.489 ************************************ 00:11:00.489 START TEST raid_rebuild_test_sb 00:11:00.489 ************************************ 00:11:00.489 14:35:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true 00:11:00.489 14:35:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:00.489 14:35:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:11:00.489 14:35:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:00.489 14:35:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:00.489 14:35:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:00.489 14:35:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:00.489 14:35:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:00.489 14:35:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:00.489 14:35:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:00.489 14:35:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:00.489 14:35:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:00.489 14:35:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:00.489 14:35:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:00.489 14:35:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 
00:11:00.489 14:35:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:00.489 14:35:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:00.489 14:35:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:11:00.489 14:35:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:00.489 14:35:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:00.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.489 14:35:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:00.489 14:35:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:00.489 14:35:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:00.489 14:35:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:00.489 14:35:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:00.489 14:35:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:00.489 14:35:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:00.489 14:35:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:00.489 14:35:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:00.489 14:35:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:00.489 14:35:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:00.489 14:35:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=76091 00:11:00.489 14:35:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 76091 00:11:00.489 14:35:51 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 76091 ']' 00:11:00.489 14:35:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.489 14:35:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:00.489 14:35:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.489 14:35:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:00.489 14:35:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.489 14:35:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:00.489 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:00.489 Zero copy mechanism will not be used. 00:11:00.489 [2024-10-01 14:35:51.920749] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:11:00.489 [2024-10-01 14:35:51.920866] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76091 ] 00:11:00.489 [2024-10-01 14:35:52.071800] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.747 [2024-10-01 14:35:52.263450] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.747 [2024-10-01 14:35:52.402150] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:00.747 [2024-10-01 14:35:52.402191] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:01.315 14:35:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:01.315 14:35:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:01.315 14:35:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:01.315 14:35:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:01.315 14:35:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.315 14:35:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.315 BaseBdev1_malloc 00:11:01.315 14:35:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.315 14:35:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:01.315 14:35:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.315 14:35:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.315 [2024-10-01 14:35:52.817189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:11:01.315 [2024-10-01 14:35:52.817405] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.315 [2024-10-01 14:35:52.817434] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:01.315 [2024-10-01 14:35:52.817449] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.315 [2024-10-01 14:35:52.819656] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.315 [2024-10-01 14:35:52.819698] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:01.315 BaseBdev1 00:11:01.315 14:35:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.315 14:35:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:01.315 14:35:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:01.315 14:35:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.315 14:35:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.315 BaseBdev2_malloc 00:11:01.315 14:35:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.315 14:35:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:01.315 14:35:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.315 14:35:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.315 [2024-10-01 14:35:52.868291] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:01.315 [2024-10-01 14:35:52.868354] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.315 [2024-10-01 14:35:52.868374] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:01.315 [2024-10-01 14:35:52.868384] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.315 [2024-10-01 14:35:52.870485] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.315 [2024-10-01 14:35:52.870641] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:01.315 BaseBdev2 00:11:01.315 14:35:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.315 14:35:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:01.315 14:35:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:01.315 14:35:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.315 14:35:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.315 BaseBdev3_malloc 00:11:01.316 14:35:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.316 14:35:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:11:01.316 14:35:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.316 14:35:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.316 [2024-10-01 14:35:52.904090] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:11:01.316 [2024-10-01 14:35:52.904142] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.316 [2024-10-01 14:35:52.904161] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:01.316 [2024-10-01 14:35:52.904172] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:11:01.316 [2024-10-01 14:35:52.906259] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.316 [2024-10-01 14:35:52.906296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:01.316 BaseBdev3 00:11:01.316 14:35:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.316 14:35:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:01.316 14:35:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:01.316 14:35:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.316 14:35:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.316 BaseBdev4_malloc 00:11:01.316 14:35:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.316 14:35:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:11:01.316 14:35:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.316 14:35:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.316 [2024-10-01 14:35:52.942409] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:11:01.316 [2024-10-01 14:35:52.942467] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.316 [2024-10-01 14:35:52.942484] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:01.316 [2024-10-01 14:35:52.942494] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.316 [2024-10-01 14:35:52.944834] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.316 [2024-10-01 14:35:52.944988] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:01.316 BaseBdev4 00:11:01.316 14:35:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.316 14:35:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:01.316 14:35:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.316 14:35:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.316 spare_malloc 00:11:01.316 14:35:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.316 14:35:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:01.316 14:35:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.316 14:35:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.316 spare_delay 00:11:01.316 14:35:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.316 14:35:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:01.316 14:35:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.316 14:35:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.316 [2024-10-01 14:35:52.993181] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:01.316 [2024-10-01 14:35:52.993238] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.316 [2024-10-01 14:35:52.993256] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:01.316 [2024-10-01 14:35:52.993266] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:11:01.316 [2024-10-01 14:35:52.995567] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.316 [2024-10-01 14:35:52.995617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:01.316 spare 00:11:01.316 14:35:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.316 14:35:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:11:01.316 14:35:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.316 14:35:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.612 [2024-10-01 14:35:53.001256] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:01.612 [2024-10-01 14:35:53.003306] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:01.612 [2024-10-01 14:35:53.003386] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:01.612 [2024-10-01 14:35:53.003450] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:01.612 [2024-10-01 14:35:53.003665] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:01.612 [2024-10-01 14:35:53.003678] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:01.612 [2024-10-01 14:35:53.003988] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:01.612 [2024-10-01 14:35:53.004148] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:01.612 [2024-10-01 14:35:53.004158] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:01.612 [2024-10-01 14:35:53.004304] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:01.612 14:35:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.612 14:35:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:01.612 14:35:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:01.612 14:35:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:01.612 14:35:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:01.612 14:35:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:01.612 14:35:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.612 14:35:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.612 14:35:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.612 14:35:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.612 14:35:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.612 14:35:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.612 14:35:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.612 14:35:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.612 14:35:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:01.612 14:35:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.612 14:35:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.612 "name": "raid_bdev1", 00:11:01.612 "uuid": 
"e4debcb9-294d-48c8-a8ac-e095ca55e7ed", 00:11:01.612 "strip_size_kb": 0, 00:11:01.612 "state": "online", 00:11:01.612 "raid_level": "raid1", 00:11:01.612 "superblock": true, 00:11:01.612 "num_base_bdevs": 4, 00:11:01.612 "num_base_bdevs_discovered": 4, 00:11:01.612 "num_base_bdevs_operational": 4, 00:11:01.612 "base_bdevs_list": [ 00:11:01.612 { 00:11:01.612 "name": "BaseBdev1", 00:11:01.612 "uuid": "54129399-1536-58ff-8d29-e268aa119706", 00:11:01.612 "is_configured": true, 00:11:01.612 "data_offset": 2048, 00:11:01.612 "data_size": 63488 00:11:01.612 }, 00:11:01.612 { 00:11:01.612 "name": "BaseBdev2", 00:11:01.612 "uuid": "798deb30-1877-5353-acc4-d0368383d416", 00:11:01.612 "is_configured": true, 00:11:01.612 "data_offset": 2048, 00:11:01.612 "data_size": 63488 00:11:01.612 }, 00:11:01.612 { 00:11:01.612 "name": "BaseBdev3", 00:11:01.612 "uuid": "d3e9130e-350f-5a7b-a625-1c2e5d0032ae", 00:11:01.612 "is_configured": true, 00:11:01.612 "data_offset": 2048, 00:11:01.612 "data_size": 63488 00:11:01.612 }, 00:11:01.612 { 00:11:01.612 "name": "BaseBdev4", 00:11:01.612 "uuid": "ac1ba125-fb59-5dbc-985b-bf3995341a97", 00:11:01.612 "is_configured": true, 00:11:01.613 "data_offset": 2048, 00:11:01.613 "data_size": 63488 00:11:01.613 } 00:11:01.613 ] 00:11:01.613 }' 00:11:01.613 14:35:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.613 14:35:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.870 14:35:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:01.870 14:35:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.870 14:35:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:01.870 14:35:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.870 [2024-10-01 14:35:53.333635] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:11:01.870 14:35:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.870 14:35:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:11:01.870 14:35:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.870 14:35:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:01.870 14:35:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.870 14:35:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.870 14:35:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.870 14:35:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:11:01.870 14:35:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:01.870 14:35:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:01.870 14:35:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:01.870 14:35:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:01.870 14:35:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:01.870 14:35:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:01.870 14:35:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:01.870 14:35:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:01.870 14:35:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:01.870 14:35:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:01.870 14:35:53 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:01.870 14:35:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:01.870 14:35:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:02.128 [2024-10-01 14:35:53.581384] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:11:02.128 /dev/nbd0 00:11:02.128 14:35:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:02.128 14:35:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:02.128 14:35:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:02.128 14:35:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:11:02.128 14:35:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:02.128 14:35:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:02.128 14:35:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:02.128 14:35:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:11:02.128 14:35:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:02.128 14:35:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:02.128 14:35:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:02.128 1+0 records in 00:11:02.128 1+0 records out 00:11:02.128 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327472 s, 12.5 MB/s 00:11:02.128 14:35:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:02.128 14:35:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:11:02.128 14:35:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:02.128 14:35:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:02.128 14:35:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:11:02.128 14:35:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:02.128 14:35:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:02.128 14:35:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:02.128 14:35:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:02.128 14:35:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:11:08.690 63488+0 records in 00:11:08.690 63488+0 records out 00:11:08.690 32505856 bytes (33 MB, 31 MiB) copied, 5.93696 s, 5.5 MB/s 00:11:08.690 14:35:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:08.690 14:35:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:08.690 14:35:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:08.690 14:35:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:08.690 14:35:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:11:08.690 14:35:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:08.690 14:35:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk 
/dev/nbd0 00:11:08.690 14:35:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:08.690 14:35:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:08.690 14:35:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:08.690 14:35:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:08.690 14:35:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:08.690 14:35:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:08.690 [2024-10-01 14:35:59.744620] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:08.690 14:35:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:08.690 14:35:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:08.690 14:35:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:08.690 14:35:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.690 14:35:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.690 [2024-10-01 14:35:59.752693] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:08.690 14:35:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.690 14:35:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:08.691 14:35:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:08.691 14:35:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:08.691 14:35:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:08.691 14:35:59 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:08.691 14:35:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:08.691 14:35:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.691 14:35:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.691 14:35:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.691 14:35:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.691 14:35:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.691 14:35:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.691 14:35:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.691 14:35:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:08.691 14:35:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.691 14:35:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.691 "name": "raid_bdev1", 00:11:08.691 "uuid": "e4debcb9-294d-48c8-a8ac-e095ca55e7ed", 00:11:08.691 "strip_size_kb": 0, 00:11:08.691 "state": "online", 00:11:08.691 "raid_level": "raid1", 00:11:08.691 "superblock": true, 00:11:08.691 "num_base_bdevs": 4, 00:11:08.691 "num_base_bdevs_discovered": 3, 00:11:08.691 "num_base_bdevs_operational": 3, 00:11:08.691 "base_bdevs_list": [ 00:11:08.691 { 00:11:08.691 "name": null, 00:11:08.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.691 "is_configured": false, 00:11:08.691 "data_offset": 0, 00:11:08.691 "data_size": 63488 00:11:08.691 }, 00:11:08.691 { 00:11:08.691 "name": "BaseBdev2", 00:11:08.691 "uuid": "798deb30-1877-5353-acc4-d0368383d416", 00:11:08.691 "is_configured": true, 00:11:08.691 
"data_offset": 2048, 00:11:08.691 "data_size": 63488 00:11:08.691 }, 00:11:08.691 { 00:11:08.691 "name": "BaseBdev3", 00:11:08.691 "uuid": "d3e9130e-350f-5a7b-a625-1c2e5d0032ae", 00:11:08.691 "is_configured": true, 00:11:08.691 "data_offset": 2048, 00:11:08.691 "data_size": 63488 00:11:08.691 }, 00:11:08.691 { 00:11:08.691 "name": "BaseBdev4", 00:11:08.691 "uuid": "ac1ba125-fb59-5dbc-985b-bf3995341a97", 00:11:08.691 "is_configured": true, 00:11:08.691 "data_offset": 2048, 00:11:08.691 "data_size": 63488 00:11:08.691 } 00:11:08.691 ] 00:11:08.691 }' 00:11:08.691 14:35:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.691 14:35:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.691 14:36:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:08.691 14:36:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.691 14:36:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.691 [2024-10-01 14:36:00.072761] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:08.691 [2024-10-01 14:36:00.080760] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:11:08.691 14:36:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.691 14:36:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:08.691 [2024-10-01 14:36:00.082359] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:09.631 14:36:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:09.631 14:36:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:09.631 14:36:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:11:09.631 14:36:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:09.631 14:36:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:09.631 14:36:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.631 14:36:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.631 14:36:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.631 14:36:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.631 14:36:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.631 14:36:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:09.631 "name": "raid_bdev1", 00:11:09.631 "uuid": "e4debcb9-294d-48c8-a8ac-e095ca55e7ed", 00:11:09.631 "strip_size_kb": 0, 00:11:09.631 "state": "online", 00:11:09.631 "raid_level": "raid1", 00:11:09.631 "superblock": true, 00:11:09.631 "num_base_bdevs": 4, 00:11:09.631 "num_base_bdevs_discovered": 4, 00:11:09.631 "num_base_bdevs_operational": 4, 00:11:09.631 "process": { 00:11:09.631 "type": "rebuild", 00:11:09.631 "target": "spare", 00:11:09.631 "progress": { 00:11:09.631 "blocks": 20480, 00:11:09.631 "percent": 32 00:11:09.631 } 00:11:09.631 }, 00:11:09.631 "base_bdevs_list": [ 00:11:09.631 { 00:11:09.631 "name": "spare", 00:11:09.631 "uuid": "3da93670-9665-587d-9d19-5d4a4bd16a22", 00:11:09.631 "is_configured": true, 00:11:09.631 "data_offset": 2048, 00:11:09.631 "data_size": 63488 00:11:09.631 }, 00:11:09.631 { 00:11:09.631 "name": "BaseBdev2", 00:11:09.631 "uuid": "798deb30-1877-5353-acc4-d0368383d416", 00:11:09.631 "is_configured": true, 00:11:09.631 "data_offset": 2048, 00:11:09.631 "data_size": 63488 00:11:09.631 }, 00:11:09.631 { 00:11:09.631 "name": "BaseBdev3", 00:11:09.631 "uuid": 
"d3e9130e-350f-5a7b-a625-1c2e5d0032ae", 00:11:09.631 "is_configured": true, 00:11:09.631 "data_offset": 2048, 00:11:09.631 "data_size": 63488 00:11:09.631 }, 00:11:09.631 { 00:11:09.631 "name": "BaseBdev4", 00:11:09.631 "uuid": "ac1ba125-fb59-5dbc-985b-bf3995341a97", 00:11:09.631 "is_configured": true, 00:11:09.631 "data_offset": 2048, 00:11:09.631 "data_size": 63488 00:11:09.631 } 00:11:09.631 ] 00:11:09.631 }' 00:11:09.631 14:36:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:09.631 14:36:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:09.631 14:36:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:09.631 14:36:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:09.631 14:36:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:09.631 14:36:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.631 14:36:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.631 [2024-10-01 14:36:01.184805] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:09.631 [2024-10-01 14:36:01.187734] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:09.631 [2024-10-01 14:36:01.187790] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:09.631 [2024-10-01 14:36:01.187804] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:09.631 [2024-10-01 14:36:01.187812] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:09.631 14:36:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.631 14:36:01 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:09.631 14:36:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:09.631 14:36:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:09.631 14:36:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:09.631 14:36:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:09.631 14:36:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:09.631 14:36:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.631 14:36:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.631 14:36:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.631 14:36:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.631 14:36:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.631 14:36:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.631 14:36:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.631 14:36:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.631 14:36:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.631 14:36:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.631 "name": "raid_bdev1", 00:11:09.631 "uuid": "e4debcb9-294d-48c8-a8ac-e095ca55e7ed", 00:11:09.631 "strip_size_kb": 0, 00:11:09.631 "state": "online", 00:11:09.631 "raid_level": "raid1", 00:11:09.631 "superblock": true, 00:11:09.631 "num_base_bdevs": 4, 00:11:09.631 
"num_base_bdevs_discovered": 3, 00:11:09.631 "num_base_bdevs_operational": 3, 00:11:09.631 "base_bdevs_list": [ 00:11:09.631 { 00:11:09.631 "name": null, 00:11:09.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.631 "is_configured": false, 00:11:09.631 "data_offset": 0, 00:11:09.631 "data_size": 63488 00:11:09.631 }, 00:11:09.631 { 00:11:09.631 "name": "BaseBdev2", 00:11:09.631 "uuid": "798deb30-1877-5353-acc4-d0368383d416", 00:11:09.631 "is_configured": true, 00:11:09.631 "data_offset": 2048, 00:11:09.631 "data_size": 63488 00:11:09.631 }, 00:11:09.631 { 00:11:09.631 "name": "BaseBdev3", 00:11:09.631 "uuid": "d3e9130e-350f-5a7b-a625-1c2e5d0032ae", 00:11:09.631 "is_configured": true, 00:11:09.631 "data_offset": 2048, 00:11:09.631 "data_size": 63488 00:11:09.631 }, 00:11:09.631 { 00:11:09.631 "name": "BaseBdev4", 00:11:09.632 "uuid": "ac1ba125-fb59-5dbc-985b-bf3995341a97", 00:11:09.632 "is_configured": true, 00:11:09.632 "data_offset": 2048, 00:11:09.632 "data_size": 63488 00:11:09.632 } 00:11:09.632 ] 00:11:09.632 }' 00:11:09.632 14:36:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.632 14:36:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.889 14:36:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:09.889 14:36:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:09.889 14:36:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:09.889 14:36:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:09.889 14:36:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:09.889 14:36:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.889 14:36:01 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.889 14:36:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.889 14:36:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.889 14:36:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.889 14:36:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:09.889 "name": "raid_bdev1", 00:11:09.889 "uuid": "e4debcb9-294d-48c8-a8ac-e095ca55e7ed", 00:11:09.889 "strip_size_kb": 0, 00:11:09.889 "state": "online", 00:11:09.889 "raid_level": "raid1", 00:11:09.889 "superblock": true, 00:11:09.889 "num_base_bdevs": 4, 00:11:09.889 "num_base_bdevs_discovered": 3, 00:11:09.889 "num_base_bdevs_operational": 3, 00:11:09.889 "base_bdevs_list": [ 00:11:09.889 { 00:11:09.889 "name": null, 00:11:09.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.889 "is_configured": false, 00:11:09.889 "data_offset": 0, 00:11:09.889 "data_size": 63488 00:11:09.889 }, 00:11:09.889 { 00:11:09.889 "name": "BaseBdev2", 00:11:09.889 "uuid": "798deb30-1877-5353-acc4-d0368383d416", 00:11:09.889 "is_configured": true, 00:11:09.889 "data_offset": 2048, 00:11:09.889 "data_size": 63488 00:11:09.889 }, 00:11:09.890 { 00:11:09.890 "name": "BaseBdev3", 00:11:09.890 "uuid": "d3e9130e-350f-5a7b-a625-1c2e5d0032ae", 00:11:09.890 "is_configured": true, 00:11:09.890 "data_offset": 2048, 00:11:09.890 "data_size": 63488 00:11:09.890 }, 00:11:09.890 { 00:11:09.890 "name": "BaseBdev4", 00:11:09.890 "uuid": "ac1ba125-fb59-5dbc-985b-bf3995341a97", 00:11:09.890 "is_configured": true, 00:11:09.890 "data_offset": 2048, 00:11:09.890 "data_size": 63488 00:11:09.890 } 00:11:09.890 ] 00:11:09.890 }' 00:11:09.890 14:36:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:10.148 14:36:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == 
\n\o\n\e ]] 00:11:10.148 14:36:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:10.148 14:36:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:10.148 14:36:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:10.148 14:36:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.148 14:36:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.148 [2024-10-01 14:36:01.620131] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:10.148 [2024-10-01 14:36:01.627624] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:11:10.148 14:36:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.148 14:36:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:10.148 [2024-10-01 14:36:01.629409] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:11.080 14:36:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:11.080 14:36:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:11.080 14:36:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:11.080 14:36:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:11.080 14:36:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:11.080 14:36:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:11.080 14:36:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.080 14:36:02 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.080 14:36:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.080 14:36:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.080 14:36:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:11.080 "name": "raid_bdev1", 00:11:11.080 "uuid": "e4debcb9-294d-48c8-a8ac-e095ca55e7ed", 00:11:11.080 "strip_size_kb": 0, 00:11:11.080 "state": "online", 00:11:11.080 "raid_level": "raid1", 00:11:11.080 "superblock": true, 00:11:11.080 "num_base_bdevs": 4, 00:11:11.080 "num_base_bdevs_discovered": 4, 00:11:11.080 "num_base_bdevs_operational": 4, 00:11:11.080 "process": { 00:11:11.080 "type": "rebuild", 00:11:11.080 "target": "spare", 00:11:11.080 "progress": { 00:11:11.080 "blocks": 20480, 00:11:11.080 "percent": 32 00:11:11.080 } 00:11:11.080 }, 00:11:11.080 "base_bdevs_list": [ 00:11:11.080 { 00:11:11.080 "name": "spare", 00:11:11.080 "uuid": "3da93670-9665-587d-9d19-5d4a4bd16a22", 00:11:11.080 "is_configured": true, 00:11:11.080 "data_offset": 2048, 00:11:11.080 "data_size": 63488 00:11:11.080 }, 00:11:11.080 { 00:11:11.080 "name": "BaseBdev2", 00:11:11.080 "uuid": "798deb30-1877-5353-acc4-d0368383d416", 00:11:11.080 "is_configured": true, 00:11:11.080 "data_offset": 2048, 00:11:11.080 "data_size": 63488 00:11:11.080 }, 00:11:11.080 { 00:11:11.080 "name": "BaseBdev3", 00:11:11.080 "uuid": "d3e9130e-350f-5a7b-a625-1c2e5d0032ae", 00:11:11.080 "is_configured": true, 00:11:11.080 "data_offset": 2048, 00:11:11.080 "data_size": 63488 00:11:11.080 }, 00:11:11.080 { 00:11:11.080 "name": "BaseBdev4", 00:11:11.080 "uuid": "ac1ba125-fb59-5dbc-985b-bf3995341a97", 00:11:11.080 "is_configured": true, 00:11:11.080 "data_offset": 2048, 00:11:11.080 "data_size": 63488 00:11:11.080 } 00:11:11.080 ] 00:11:11.080 }' 00:11:11.080 14:36:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- 
# jq -r '.process.type // "none"' 00:11:11.080 14:36:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:11.080 14:36:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:11.080 14:36:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:11.080 14:36:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:11:11.080 14:36:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:11:11.080 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:11:11.080 14:36:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:11:11.080 14:36:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:11.080 14:36:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:11:11.080 14:36:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:11.080 14:36:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.080 14:36:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.080 [2024-10-01 14:36:02.727855] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:11.338 [2024-10-01 14:36:02.834874] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:11:11.338 14:36:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.338 14:36:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:11:11.338 14:36:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:11:11.338 14:36:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:11.338 14:36:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:11.338 14:36:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:11.338 14:36:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:11.338 14:36:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:11.338 14:36:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.338 14:36:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:11.338 14:36:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.338 14:36:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.338 14:36:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.338 14:36:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:11.338 "name": "raid_bdev1", 00:11:11.338 "uuid": "e4debcb9-294d-48c8-a8ac-e095ca55e7ed", 00:11:11.338 "strip_size_kb": 0, 00:11:11.338 "state": "online", 00:11:11.338 "raid_level": "raid1", 00:11:11.338 "superblock": true, 00:11:11.338 "num_base_bdevs": 4, 00:11:11.338 "num_base_bdevs_discovered": 3, 00:11:11.338 "num_base_bdevs_operational": 3, 00:11:11.338 "process": { 00:11:11.338 "type": "rebuild", 00:11:11.338 "target": "spare", 00:11:11.338 "progress": { 00:11:11.338 "blocks": 22528, 00:11:11.338 "percent": 35 00:11:11.338 } 00:11:11.338 }, 00:11:11.338 "base_bdevs_list": [ 00:11:11.338 { 00:11:11.338 "name": "spare", 00:11:11.338 "uuid": "3da93670-9665-587d-9d19-5d4a4bd16a22", 00:11:11.338 "is_configured": true, 00:11:11.338 "data_offset": 2048, 00:11:11.338 "data_size": 63488 00:11:11.338 }, 00:11:11.338 { 00:11:11.338 "name": null, 
00:11:11.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.338 "is_configured": false, 00:11:11.338 "data_offset": 0, 00:11:11.338 "data_size": 63488 00:11:11.338 }, 00:11:11.338 { 00:11:11.338 "name": "BaseBdev3", 00:11:11.338 "uuid": "d3e9130e-350f-5a7b-a625-1c2e5d0032ae", 00:11:11.338 "is_configured": true, 00:11:11.338 "data_offset": 2048, 00:11:11.338 "data_size": 63488 00:11:11.338 }, 00:11:11.338 { 00:11:11.338 "name": "BaseBdev4", 00:11:11.338 "uuid": "ac1ba125-fb59-5dbc-985b-bf3995341a97", 00:11:11.338 "is_configured": true, 00:11:11.338 "data_offset": 2048, 00:11:11.338 "data_size": 63488 00:11:11.338 } 00:11:11.338 ] 00:11:11.338 }' 00:11:11.338 14:36:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:11.338 14:36:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:11.338 14:36:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:11.338 14:36:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:11.338 14:36:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=382 00:11:11.338 14:36:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:11.338 14:36:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:11.338 14:36:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:11.338 14:36:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:11.338 14:36:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:11.338 14:36:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:11.339 14:36:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:11.339 14:36:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.339 14:36:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.339 14:36:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:11.339 14:36:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.339 14:36:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:11.339 "name": "raid_bdev1", 00:11:11.339 "uuid": "e4debcb9-294d-48c8-a8ac-e095ca55e7ed", 00:11:11.339 "strip_size_kb": 0, 00:11:11.339 "state": "online", 00:11:11.339 "raid_level": "raid1", 00:11:11.339 "superblock": true, 00:11:11.339 "num_base_bdevs": 4, 00:11:11.339 "num_base_bdevs_discovered": 3, 00:11:11.339 "num_base_bdevs_operational": 3, 00:11:11.339 "process": { 00:11:11.339 "type": "rebuild", 00:11:11.339 "target": "spare", 00:11:11.339 "progress": { 00:11:11.339 "blocks": 24576, 00:11:11.339 "percent": 38 00:11:11.339 } 00:11:11.339 }, 00:11:11.339 "base_bdevs_list": [ 00:11:11.339 { 00:11:11.339 "name": "spare", 00:11:11.339 "uuid": "3da93670-9665-587d-9d19-5d4a4bd16a22", 00:11:11.339 "is_configured": true, 00:11:11.339 "data_offset": 2048, 00:11:11.339 "data_size": 63488 00:11:11.339 }, 00:11:11.339 { 00:11:11.339 "name": null, 00:11:11.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.339 "is_configured": false, 00:11:11.339 "data_offset": 0, 00:11:11.339 "data_size": 63488 00:11:11.339 }, 00:11:11.339 { 00:11:11.339 "name": "BaseBdev3", 00:11:11.339 "uuid": "d3e9130e-350f-5a7b-a625-1c2e5d0032ae", 00:11:11.339 "is_configured": true, 00:11:11.339 "data_offset": 2048, 00:11:11.339 "data_size": 63488 00:11:11.339 }, 00:11:11.339 { 00:11:11.339 "name": "BaseBdev4", 00:11:11.339 "uuid": "ac1ba125-fb59-5dbc-985b-bf3995341a97", 00:11:11.339 "is_configured": true, 00:11:11.339 "data_offset": 
2048, 00:11:11.339 "data_size": 63488 00:11:11.339 } 00:11:11.339 ] 00:11:11.339 }' 00:11:11.339 14:36:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:11.596 14:36:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:11.596 14:36:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:11.596 14:36:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:11.597 14:36:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:12.528 14:36:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:12.528 14:36:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:12.528 14:36:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:12.528 14:36:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:12.528 14:36:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:12.528 14:36:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:12.528 14:36:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.528 14:36:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:12.528 14:36:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.528 14:36:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.528 14:36:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.528 14:36:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:12.528 "name": "raid_bdev1", 00:11:12.528 
"uuid": "e4debcb9-294d-48c8-a8ac-e095ca55e7ed", 00:11:12.528 "strip_size_kb": 0, 00:11:12.528 "state": "online", 00:11:12.528 "raid_level": "raid1", 00:11:12.528 "superblock": true, 00:11:12.528 "num_base_bdevs": 4, 00:11:12.528 "num_base_bdevs_discovered": 3, 00:11:12.528 "num_base_bdevs_operational": 3, 00:11:12.528 "process": { 00:11:12.528 "type": "rebuild", 00:11:12.528 "target": "spare", 00:11:12.528 "progress": { 00:11:12.528 "blocks": 47104, 00:11:12.528 "percent": 74 00:11:12.528 } 00:11:12.528 }, 00:11:12.528 "base_bdevs_list": [ 00:11:12.528 { 00:11:12.528 "name": "spare", 00:11:12.528 "uuid": "3da93670-9665-587d-9d19-5d4a4bd16a22", 00:11:12.528 "is_configured": true, 00:11:12.528 "data_offset": 2048, 00:11:12.528 "data_size": 63488 00:11:12.528 }, 00:11:12.528 { 00:11:12.528 "name": null, 00:11:12.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.528 "is_configured": false, 00:11:12.528 "data_offset": 0, 00:11:12.528 "data_size": 63488 00:11:12.528 }, 00:11:12.528 { 00:11:12.528 "name": "BaseBdev3", 00:11:12.528 "uuid": "d3e9130e-350f-5a7b-a625-1c2e5d0032ae", 00:11:12.528 "is_configured": true, 00:11:12.528 "data_offset": 2048, 00:11:12.528 "data_size": 63488 00:11:12.528 }, 00:11:12.528 { 00:11:12.528 "name": "BaseBdev4", 00:11:12.528 "uuid": "ac1ba125-fb59-5dbc-985b-bf3995341a97", 00:11:12.528 "is_configured": true, 00:11:12.528 "data_offset": 2048, 00:11:12.528 "data_size": 63488 00:11:12.528 } 00:11:12.528 ] 00:11:12.528 }' 00:11:12.528 14:36:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:12.528 14:36:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:12.528 14:36:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:12.528 14:36:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:12.528 14:36:04 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:11:13.461 [2024-10-01 14:36:04.844967] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:13.461 [2024-10-01 14:36:04.845042] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:13.461 [2024-10-01 14:36:04.845157] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:13.719 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:13.719 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:13.719 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:13.719 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:13.719 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:13.719 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:13.719 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.719 14:36:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.719 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:13.719 14:36:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.719 14:36:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.719 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:13.719 "name": "raid_bdev1", 00:11:13.719 "uuid": "e4debcb9-294d-48c8-a8ac-e095ca55e7ed", 00:11:13.719 "strip_size_kb": 0, 00:11:13.719 "state": "online", 00:11:13.719 "raid_level": "raid1", 00:11:13.719 "superblock": true, 00:11:13.719 "num_base_bdevs": 
4, 00:11:13.719 "num_base_bdevs_discovered": 3, 00:11:13.719 "num_base_bdevs_operational": 3, 00:11:13.719 "base_bdevs_list": [ 00:11:13.719 { 00:11:13.719 "name": "spare", 00:11:13.719 "uuid": "3da93670-9665-587d-9d19-5d4a4bd16a22", 00:11:13.719 "is_configured": true, 00:11:13.719 "data_offset": 2048, 00:11:13.719 "data_size": 63488 00:11:13.719 }, 00:11:13.719 { 00:11:13.719 "name": null, 00:11:13.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.719 "is_configured": false, 00:11:13.719 "data_offset": 0, 00:11:13.719 "data_size": 63488 00:11:13.719 }, 00:11:13.719 { 00:11:13.719 "name": "BaseBdev3", 00:11:13.719 "uuid": "d3e9130e-350f-5a7b-a625-1c2e5d0032ae", 00:11:13.719 "is_configured": true, 00:11:13.719 "data_offset": 2048, 00:11:13.719 "data_size": 63488 00:11:13.719 }, 00:11:13.719 { 00:11:13.719 "name": "BaseBdev4", 00:11:13.719 "uuid": "ac1ba125-fb59-5dbc-985b-bf3995341a97", 00:11:13.719 "is_configured": true, 00:11:13.719 "data_offset": 2048, 00:11:13.719 "data_size": 63488 00:11:13.719 } 00:11:13.719 ] 00:11:13.719 }' 00:11:13.720 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:13.720 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:13.720 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:13.720 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:13.720 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:11:13.720 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:13.720 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:13.720 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:13.720 14:36:05 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:13.720 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:13.720 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.720 14:36:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.720 14:36:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.720 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:13.720 14:36:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.720 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:13.720 "name": "raid_bdev1", 00:11:13.720 "uuid": "e4debcb9-294d-48c8-a8ac-e095ca55e7ed", 00:11:13.720 "strip_size_kb": 0, 00:11:13.720 "state": "online", 00:11:13.720 "raid_level": "raid1", 00:11:13.720 "superblock": true, 00:11:13.720 "num_base_bdevs": 4, 00:11:13.720 "num_base_bdevs_discovered": 3, 00:11:13.720 "num_base_bdevs_operational": 3, 00:11:13.720 "base_bdevs_list": [ 00:11:13.720 { 00:11:13.720 "name": "spare", 00:11:13.720 "uuid": "3da93670-9665-587d-9d19-5d4a4bd16a22", 00:11:13.720 "is_configured": true, 00:11:13.720 "data_offset": 2048, 00:11:13.720 "data_size": 63488 00:11:13.720 }, 00:11:13.720 { 00:11:13.720 "name": null, 00:11:13.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.720 "is_configured": false, 00:11:13.720 "data_offset": 0, 00:11:13.720 "data_size": 63488 00:11:13.720 }, 00:11:13.720 { 00:11:13.720 "name": "BaseBdev3", 00:11:13.720 "uuid": "d3e9130e-350f-5a7b-a625-1c2e5d0032ae", 00:11:13.720 "is_configured": true, 00:11:13.720 "data_offset": 2048, 00:11:13.720 "data_size": 63488 00:11:13.720 }, 00:11:13.720 { 00:11:13.720 "name": "BaseBdev4", 00:11:13.720 "uuid": 
"ac1ba125-fb59-5dbc-985b-bf3995341a97", 00:11:13.720 "is_configured": true, 00:11:13.720 "data_offset": 2048, 00:11:13.720 "data_size": 63488 00:11:13.720 } 00:11:13.720 ] 00:11:13.720 }' 00:11:13.720 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:13.720 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:13.720 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:13.720 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:13.720 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:13.720 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:13.720 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:13.720 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:13.720 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:13.720 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:13.720 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.720 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.720 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.720 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.720 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.720 14:36:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.720 14:36:05 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.720 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:13.720 14:36:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.720 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.720 "name": "raid_bdev1", 00:11:13.720 "uuid": "e4debcb9-294d-48c8-a8ac-e095ca55e7ed", 00:11:13.720 "strip_size_kb": 0, 00:11:13.720 "state": "online", 00:11:13.720 "raid_level": "raid1", 00:11:13.720 "superblock": true, 00:11:13.720 "num_base_bdevs": 4, 00:11:13.720 "num_base_bdevs_discovered": 3, 00:11:13.720 "num_base_bdevs_operational": 3, 00:11:13.720 "base_bdevs_list": [ 00:11:13.720 { 00:11:13.720 "name": "spare", 00:11:13.720 "uuid": "3da93670-9665-587d-9d19-5d4a4bd16a22", 00:11:13.720 "is_configured": true, 00:11:13.720 "data_offset": 2048, 00:11:13.720 "data_size": 63488 00:11:13.720 }, 00:11:13.720 { 00:11:13.720 "name": null, 00:11:13.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.720 "is_configured": false, 00:11:13.720 "data_offset": 0, 00:11:13.720 "data_size": 63488 00:11:13.720 }, 00:11:13.720 { 00:11:13.720 "name": "BaseBdev3", 00:11:13.720 "uuid": "d3e9130e-350f-5a7b-a625-1c2e5d0032ae", 00:11:13.720 "is_configured": true, 00:11:13.720 "data_offset": 2048, 00:11:13.720 "data_size": 63488 00:11:13.720 }, 00:11:13.720 { 00:11:13.720 "name": "BaseBdev4", 00:11:13.720 "uuid": "ac1ba125-fb59-5dbc-985b-bf3995341a97", 00:11:13.720 "is_configured": true, 00:11:13.720 "data_offset": 2048, 00:11:13.720 "data_size": 63488 00:11:13.720 } 00:11:13.720 ] 00:11:13.720 }' 00:11:13.720 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.720 14:36:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.284 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 
-- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:14.284 14:36:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.284 14:36:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.284 [2024-10-01 14:36:05.705487] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:14.284 [2024-10-01 14:36:05.705518] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:14.284 [2024-10-01 14:36:05.705582] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:14.284 [2024-10-01 14:36:05.705662] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:14.284 [2024-10-01 14:36:05.705671] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:14.284 14:36:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.284 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:11:14.284 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.284 14:36:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.284 14:36:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.284 14:36:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.284 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:14.284 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:14.284 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:14.284 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 
00:11:14.284 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:14.284 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:11:14.284 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:14.284 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:14.284 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:14.284 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:14.284 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:14.284 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:14.284 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:14.284 /dev/nbd0 00:11:14.542 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:14.542 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:14.542 14:36:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:14.542 14:36:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:11:14.542 14:36:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:14.542 14:36:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:14.542 14:36:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:14.542 14:36:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:11:14.542 14:36:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:14.542 
14:36:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:14.542 14:36:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:14.542 1+0 records in 00:11:14.542 1+0 records out 00:11:14.542 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302114 s, 13.6 MB/s 00:11:14.542 14:36:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:14.542 14:36:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:11:14.542 14:36:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:14.542 14:36:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:14.542 14:36:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:11:14.542 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:14.542 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:14.542 14:36:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:14.542 /dev/nbd1 00:11:14.542 14:36:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:14.838 14:36:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:14.838 14:36:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:14.838 14:36:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:11:14.838 14:36:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:14.838 14:36:06 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:14.838 14:36:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:14.838 14:36:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:11:14.838 14:36:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:14.838 14:36:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:14.838 14:36:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:14.838 1+0 records in 00:11:14.838 1+0 records out 00:11:14.838 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235023 s, 17.4 MB/s 00:11:14.838 14:36:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:14.838 14:36:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:11:14.838 14:36:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:14.838 14:36:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:14.838 14:36:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:11:14.838 14:36:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:14.838 14:36:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:14.838 14:36:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:11:14.838 14:36:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:14.838 14:36:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:14.838 14:36:06 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:14.838 14:36:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:14.838 14:36:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:11:14.838 14:36:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:14.838 14:36:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:15.096 14:36:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:15.096 14:36:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:15.096 14:36:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:15.096 14:36:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:15.096 14:36:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:15.096 14:36:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:15.096 14:36:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:15.096 14:36:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:15.096 14:36:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:15.096 14:36:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:15.353 14:36:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:15.353 14:36:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:15.353 14:36:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:15.354 14:36:06 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:15.354 14:36:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:15.354 14:36:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:15.354 14:36:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:15.354 14:36:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:15.354 14:36:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:11:15.354 14:36:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:11:15.354 14:36:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.354 14:36:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.354 14:36:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.354 14:36:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:15.354 14:36:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.354 14:36:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.354 [2024-10-01 14:36:06.857244] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:15.354 [2024-10-01 14:36:06.857293] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.354 [2024-10-01 14:36:06.857314] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:11:15.354 [2024-10-01 14:36:06.857322] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.354 [2024-10-01 14:36:06.859213] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.354 [2024-10-01 14:36:06.859246] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:15.354 [2024-10-01 14:36:06.859325] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:15.354 [2024-10-01 14:36:06.859362] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:15.354 spare 00:11:15.354 [2024-10-01 14:36:06.859471] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:15.354 [2024-10-01 14:36:06.859549] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:15.354 14:36:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.354 14:36:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:11:15.354 14:36:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.354 14:36:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.354 [2024-10-01 14:36:06.959638] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:11:15.354 [2024-10-01 14:36:06.959676] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:15.354 [2024-10-01 14:36:06.959979] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:11:15.354 [2024-10-01 14:36:06.960129] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:11:15.354 [2024-10-01 14:36:06.960142] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:11:15.354 [2024-10-01 14:36:06.960285] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:15.354 14:36:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.354 14:36:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 3 00:11:15.354 14:36:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:15.354 14:36:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:15.354 14:36:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.354 14:36:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.354 14:36:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:15.354 14:36:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.354 14:36:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.354 14:36:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.354 14:36:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.354 14:36:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:15.354 14:36:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.354 14:36:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.354 14:36:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.354 14:36:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.354 14:36:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.354 "name": "raid_bdev1", 00:11:15.354 "uuid": "e4debcb9-294d-48c8-a8ac-e095ca55e7ed", 00:11:15.354 "strip_size_kb": 0, 00:11:15.354 "state": "online", 00:11:15.354 "raid_level": "raid1", 00:11:15.354 "superblock": true, 00:11:15.354 "num_base_bdevs": 4, 00:11:15.354 "num_base_bdevs_discovered": 3, 00:11:15.354 "num_base_bdevs_operational": 
3, 00:11:15.354 "base_bdevs_list": [ 00:11:15.354 { 00:11:15.354 "name": "spare", 00:11:15.354 "uuid": "3da93670-9665-587d-9d19-5d4a4bd16a22", 00:11:15.354 "is_configured": true, 00:11:15.354 "data_offset": 2048, 00:11:15.354 "data_size": 63488 00:11:15.354 }, 00:11:15.354 { 00:11:15.354 "name": null, 00:11:15.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.354 "is_configured": false, 00:11:15.354 "data_offset": 2048, 00:11:15.354 "data_size": 63488 00:11:15.354 }, 00:11:15.354 { 00:11:15.354 "name": "BaseBdev3", 00:11:15.354 "uuid": "d3e9130e-350f-5a7b-a625-1c2e5d0032ae", 00:11:15.354 "is_configured": true, 00:11:15.354 "data_offset": 2048, 00:11:15.354 "data_size": 63488 00:11:15.354 }, 00:11:15.354 { 00:11:15.354 "name": "BaseBdev4", 00:11:15.354 "uuid": "ac1ba125-fb59-5dbc-985b-bf3995341a97", 00:11:15.354 "is_configured": true, 00:11:15.354 "data_offset": 2048, 00:11:15.354 "data_size": 63488 00:11:15.354 } 00:11:15.354 ] 00:11:15.354 }' 00:11:15.354 14:36:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.354 14:36:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.916 14:36:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:15.916 14:36:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:15.916 14:36:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:15.916 14:36:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:15.916 14:36:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:15.916 14:36:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.916 14:36:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:15.916 14:36:07 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.916 14:36:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.916 14:36:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.916 14:36:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:15.916 "name": "raid_bdev1", 00:11:15.916 "uuid": "e4debcb9-294d-48c8-a8ac-e095ca55e7ed", 00:11:15.916 "strip_size_kb": 0, 00:11:15.916 "state": "online", 00:11:15.916 "raid_level": "raid1", 00:11:15.916 "superblock": true, 00:11:15.916 "num_base_bdevs": 4, 00:11:15.916 "num_base_bdevs_discovered": 3, 00:11:15.916 "num_base_bdevs_operational": 3, 00:11:15.916 "base_bdevs_list": [ 00:11:15.916 { 00:11:15.916 "name": "spare", 00:11:15.916 "uuid": "3da93670-9665-587d-9d19-5d4a4bd16a22", 00:11:15.916 "is_configured": true, 00:11:15.916 "data_offset": 2048, 00:11:15.916 "data_size": 63488 00:11:15.916 }, 00:11:15.916 { 00:11:15.916 "name": null, 00:11:15.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.916 "is_configured": false, 00:11:15.916 "data_offset": 2048, 00:11:15.916 "data_size": 63488 00:11:15.916 }, 00:11:15.916 { 00:11:15.916 "name": "BaseBdev3", 00:11:15.916 "uuid": "d3e9130e-350f-5a7b-a625-1c2e5d0032ae", 00:11:15.916 "is_configured": true, 00:11:15.916 "data_offset": 2048, 00:11:15.916 "data_size": 63488 00:11:15.916 }, 00:11:15.916 { 00:11:15.916 "name": "BaseBdev4", 00:11:15.916 "uuid": "ac1ba125-fb59-5dbc-985b-bf3995341a97", 00:11:15.916 "is_configured": true, 00:11:15.916 "data_offset": 2048, 00:11:15.916 "data_size": 63488 00:11:15.916 } 00:11:15.916 ] 00:11:15.916 }' 00:11:15.916 14:36:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:15.916 14:36:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:15.917 14:36:07 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:15.917 14:36:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:15.917 14:36:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.917 14:36:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.917 14:36:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.917 14:36:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:11:15.917 14:36:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.917 14:36:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:11:15.917 14:36:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:15.917 14:36:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.917 14:36:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.917 [2024-10-01 14:36:07.441399] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:15.917 14:36:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.917 14:36:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:15.917 14:36:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:15.917 14:36:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:15.917 14:36:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.917 14:36:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.917 14:36:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:11:15.917 14:36:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.917 14:36:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.917 14:36:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.917 14:36:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.917 14:36:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.917 14:36:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:15.917 14:36:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.917 14:36:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.917 14:36:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.917 14:36:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.917 "name": "raid_bdev1", 00:11:15.917 "uuid": "e4debcb9-294d-48c8-a8ac-e095ca55e7ed", 00:11:15.917 "strip_size_kb": 0, 00:11:15.917 "state": "online", 00:11:15.917 "raid_level": "raid1", 00:11:15.917 "superblock": true, 00:11:15.917 "num_base_bdevs": 4, 00:11:15.917 "num_base_bdevs_discovered": 2, 00:11:15.917 "num_base_bdevs_operational": 2, 00:11:15.917 "base_bdevs_list": [ 00:11:15.917 { 00:11:15.917 "name": null, 00:11:15.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.917 "is_configured": false, 00:11:15.917 "data_offset": 0, 00:11:15.917 "data_size": 63488 00:11:15.917 }, 00:11:15.917 { 00:11:15.917 "name": null, 00:11:15.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.917 "is_configured": false, 00:11:15.917 "data_offset": 2048, 00:11:15.917 "data_size": 63488 00:11:15.917 }, 00:11:15.917 { 00:11:15.917 "name": "BaseBdev3", 00:11:15.917 
"uuid": "d3e9130e-350f-5a7b-a625-1c2e5d0032ae", 00:11:15.917 "is_configured": true, 00:11:15.917 "data_offset": 2048, 00:11:15.917 "data_size": 63488 00:11:15.917 }, 00:11:15.917 { 00:11:15.917 "name": "BaseBdev4", 00:11:15.917 "uuid": "ac1ba125-fb59-5dbc-985b-bf3995341a97", 00:11:15.917 "is_configured": true, 00:11:15.917 "data_offset": 2048, 00:11:15.917 "data_size": 63488 00:11:15.917 } 00:11:15.917 ] 00:11:15.917 }' 00:11:15.917 14:36:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.917 14:36:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.172 14:36:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:16.172 14:36:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.172 14:36:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.172 [2024-10-01 14:36:07.749467] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:16.172 [2024-10-01 14:36:07.749759] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:11:16.172 [2024-10-01 14:36:07.749780] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:11:16.172 [2024-10-01 14:36:07.749818] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:16.172 [2024-10-01 14:36:07.757013] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:11:16.172 14:36:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.172 14:36:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:11:16.172 [2024-10-01 14:36:07.758643] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:17.103 14:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:17.103 14:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:17.103 14:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:17.103 14:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:17.103 14:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:17.103 14:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.103 14:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.103 14:36:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.103 14:36:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.103 14:36:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.360 14:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:17.360 "name": "raid_bdev1", 00:11:17.360 "uuid": "e4debcb9-294d-48c8-a8ac-e095ca55e7ed", 00:11:17.360 "strip_size_kb": 0, 00:11:17.360 "state": "online", 00:11:17.360 "raid_level": "raid1", 
00:11:17.360 "superblock": true, 00:11:17.360 "num_base_bdevs": 4, 00:11:17.360 "num_base_bdevs_discovered": 3, 00:11:17.360 "num_base_bdevs_operational": 3, 00:11:17.360 "process": { 00:11:17.360 "type": "rebuild", 00:11:17.360 "target": "spare", 00:11:17.360 "progress": { 00:11:17.360 "blocks": 20480, 00:11:17.360 "percent": 32 00:11:17.360 } 00:11:17.360 }, 00:11:17.360 "base_bdevs_list": [ 00:11:17.360 { 00:11:17.360 "name": "spare", 00:11:17.360 "uuid": "3da93670-9665-587d-9d19-5d4a4bd16a22", 00:11:17.360 "is_configured": true, 00:11:17.360 "data_offset": 2048, 00:11:17.360 "data_size": 63488 00:11:17.360 }, 00:11:17.360 { 00:11:17.360 "name": null, 00:11:17.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.360 "is_configured": false, 00:11:17.360 "data_offset": 2048, 00:11:17.360 "data_size": 63488 00:11:17.360 }, 00:11:17.360 { 00:11:17.360 "name": "BaseBdev3", 00:11:17.360 "uuid": "d3e9130e-350f-5a7b-a625-1c2e5d0032ae", 00:11:17.360 "is_configured": true, 00:11:17.360 "data_offset": 2048, 00:11:17.360 "data_size": 63488 00:11:17.360 }, 00:11:17.360 { 00:11:17.360 "name": "BaseBdev4", 00:11:17.360 "uuid": "ac1ba125-fb59-5dbc-985b-bf3995341a97", 00:11:17.360 "is_configured": true, 00:11:17.360 "data_offset": 2048, 00:11:17.360 "data_size": 63488 00:11:17.360 } 00:11:17.360 ] 00:11:17.360 }' 00:11:17.360 14:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:17.360 14:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:17.360 14:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:17.360 14:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:17.360 14:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:11:17.360 14:36:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:17.360 14:36:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.360 [2024-10-01 14:36:08.864997] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:17.360 [2024-10-01 14:36:08.964267] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:17.360 [2024-10-01 14:36:08.964502] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:17.360 [2024-10-01 14:36:08.964522] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:17.360 [2024-10-01 14:36:08.964530] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:17.360 14:36:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.360 14:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:17.360 14:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:17.360 14:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:17.360 14:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.360 14:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.360 14:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:17.360 14:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.360 14:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.360 14:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.360 14:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.360 14:36:08 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.360 14:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.360 14:36:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.360 14:36:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.360 14:36:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.360 14:36:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.360 "name": "raid_bdev1", 00:11:17.360 "uuid": "e4debcb9-294d-48c8-a8ac-e095ca55e7ed", 00:11:17.360 "strip_size_kb": 0, 00:11:17.360 "state": "online", 00:11:17.360 "raid_level": "raid1", 00:11:17.360 "superblock": true, 00:11:17.360 "num_base_bdevs": 4, 00:11:17.360 "num_base_bdevs_discovered": 2, 00:11:17.360 "num_base_bdevs_operational": 2, 00:11:17.360 "base_bdevs_list": [ 00:11:17.360 { 00:11:17.360 "name": null, 00:11:17.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.360 "is_configured": false, 00:11:17.360 "data_offset": 0, 00:11:17.360 "data_size": 63488 00:11:17.360 }, 00:11:17.360 { 00:11:17.360 "name": null, 00:11:17.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.360 "is_configured": false, 00:11:17.360 "data_offset": 2048, 00:11:17.360 "data_size": 63488 00:11:17.360 }, 00:11:17.360 { 00:11:17.360 "name": "BaseBdev3", 00:11:17.360 "uuid": "d3e9130e-350f-5a7b-a625-1c2e5d0032ae", 00:11:17.360 "is_configured": true, 00:11:17.360 "data_offset": 2048, 00:11:17.360 "data_size": 63488 00:11:17.360 }, 00:11:17.360 { 00:11:17.360 "name": "BaseBdev4", 00:11:17.360 "uuid": "ac1ba125-fb59-5dbc-985b-bf3995341a97", 00:11:17.360 "is_configured": true, 00:11:17.360 "data_offset": 2048, 00:11:17.360 "data_size": 63488 00:11:17.360 } 00:11:17.360 ] 00:11:17.360 }' 00:11:17.360 14:36:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:11:17.360 14:36:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.617 14:36:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:17.617 14:36:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.617 14:36:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.617 [2024-10-01 14:36:09.277010] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:17.617 [2024-10-01 14:36:09.277067] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.617 [2024-10-01 14:36:09.277089] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:11:17.617 [2024-10-01 14:36:09.277097] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.617 [2024-10-01 14:36:09.277473] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.617 [2024-10-01 14:36:09.277497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:17.617 [2024-10-01 14:36:09.277570] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:17.617 [2024-10-01 14:36:09.277580] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:11:17.617 [2024-10-01 14:36:09.277591] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:11:17.617 [2024-10-01 14:36:09.277625] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:17.617 [2024-10-01 14:36:09.284770] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:11:17.617 spare 00:11:17.617 14:36:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.617 14:36:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:11:17.617 [2024-10-01 14:36:09.286357] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:19.035 14:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:19.035 14:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:19.035 14:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:19.035 14:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:19.035 14:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:19.035 14:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.035 14:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.035 14:36:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.035 14:36:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.035 14:36:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.035 14:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:19.035 "name": "raid_bdev1", 00:11:19.035 "uuid": "e4debcb9-294d-48c8-a8ac-e095ca55e7ed", 00:11:19.035 "strip_size_kb": 0, 00:11:19.035 "state": "online", 00:11:19.035 
"raid_level": "raid1", 00:11:19.035 "superblock": true, 00:11:19.035 "num_base_bdevs": 4, 00:11:19.035 "num_base_bdevs_discovered": 3, 00:11:19.035 "num_base_bdevs_operational": 3, 00:11:19.035 "process": { 00:11:19.035 "type": "rebuild", 00:11:19.035 "target": "spare", 00:11:19.035 "progress": { 00:11:19.035 "blocks": 20480, 00:11:19.035 "percent": 32 00:11:19.035 } 00:11:19.035 }, 00:11:19.035 "base_bdevs_list": [ 00:11:19.035 { 00:11:19.035 "name": "spare", 00:11:19.035 "uuid": "3da93670-9665-587d-9d19-5d4a4bd16a22", 00:11:19.035 "is_configured": true, 00:11:19.035 "data_offset": 2048, 00:11:19.035 "data_size": 63488 00:11:19.035 }, 00:11:19.035 { 00:11:19.035 "name": null, 00:11:19.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.035 "is_configured": false, 00:11:19.035 "data_offset": 2048, 00:11:19.035 "data_size": 63488 00:11:19.035 }, 00:11:19.035 { 00:11:19.035 "name": "BaseBdev3", 00:11:19.035 "uuid": "d3e9130e-350f-5a7b-a625-1c2e5d0032ae", 00:11:19.035 "is_configured": true, 00:11:19.035 "data_offset": 2048, 00:11:19.035 "data_size": 63488 00:11:19.035 }, 00:11:19.035 { 00:11:19.035 "name": "BaseBdev4", 00:11:19.035 "uuid": "ac1ba125-fb59-5dbc-985b-bf3995341a97", 00:11:19.035 "is_configured": true, 00:11:19.035 "data_offset": 2048, 00:11:19.035 "data_size": 63488 00:11:19.035 } 00:11:19.035 ] 00:11:19.035 }' 00:11:19.035 14:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:19.035 14:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:19.035 14:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:19.035 14:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:19.035 14:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:11:19.035 14:36:10 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.035 14:36:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.035 [2024-10-01 14:36:10.397108] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:19.035 [2024-10-01 14:36:10.491981] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:19.035 [2024-10-01 14:36:10.492041] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:19.035 [2024-10-01 14:36:10.492055] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:19.035 [2024-10-01 14:36:10.492063] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:19.035 14:36:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.035 14:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:19.035 14:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:19.035 14:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:19.035 14:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:19.035 14:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:19.035 14:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:19.035 14:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.035 14:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.035 14:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.035 14:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.035 
14:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.035 14:36:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.035 14:36:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.035 14:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.035 14:36:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.035 14:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.035 "name": "raid_bdev1", 00:11:19.035 "uuid": "e4debcb9-294d-48c8-a8ac-e095ca55e7ed", 00:11:19.035 "strip_size_kb": 0, 00:11:19.035 "state": "online", 00:11:19.035 "raid_level": "raid1", 00:11:19.035 "superblock": true, 00:11:19.035 "num_base_bdevs": 4, 00:11:19.035 "num_base_bdevs_discovered": 2, 00:11:19.035 "num_base_bdevs_operational": 2, 00:11:19.035 "base_bdevs_list": [ 00:11:19.035 { 00:11:19.035 "name": null, 00:11:19.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.035 "is_configured": false, 00:11:19.035 "data_offset": 0, 00:11:19.035 "data_size": 63488 00:11:19.035 }, 00:11:19.035 { 00:11:19.035 "name": null, 00:11:19.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.035 "is_configured": false, 00:11:19.035 "data_offset": 2048, 00:11:19.035 "data_size": 63488 00:11:19.035 }, 00:11:19.035 { 00:11:19.035 "name": "BaseBdev3", 00:11:19.035 "uuid": "d3e9130e-350f-5a7b-a625-1c2e5d0032ae", 00:11:19.035 "is_configured": true, 00:11:19.035 "data_offset": 2048, 00:11:19.035 "data_size": 63488 00:11:19.035 }, 00:11:19.035 { 00:11:19.035 "name": "BaseBdev4", 00:11:19.035 "uuid": "ac1ba125-fb59-5dbc-985b-bf3995341a97", 00:11:19.035 "is_configured": true, 00:11:19.035 "data_offset": 2048, 00:11:19.035 "data_size": 63488 00:11:19.035 } 00:11:19.035 ] 00:11:19.035 }' 00:11:19.035 14:36:10 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.035 14:36:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.292 14:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:19.292 14:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:19.292 14:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:19.292 14:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:19.292 14:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:19.292 14:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.292 14:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.292 14:36:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.292 14:36:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.292 14:36:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.292 14:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:19.292 "name": "raid_bdev1", 00:11:19.292 "uuid": "e4debcb9-294d-48c8-a8ac-e095ca55e7ed", 00:11:19.292 "strip_size_kb": 0, 00:11:19.292 "state": "online", 00:11:19.292 "raid_level": "raid1", 00:11:19.292 "superblock": true, 00:11:19.292 "num_base_bdevs": 4, 00:11:19.292 "num_base_bdevs_discovered": 2, 00:11:19.292 "num_base_bdevs_operational": 2, 00:11:19.292 "base_bdevs_list": [ 00:11:19.292 { 00:11:19.292 "name": null, 00:11:19.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.292 "is_configured": false, 00:11:19.292 "data_offset": 0, 00:11:19.292 "data_size": 63488 00:11:19.292 }, 00:11:19.292 
{ 00:11:19.292 "name": null, 00:11:19.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.292 "is_configured": false, 00:11:19.292 "data_offset": 2048, 00:11:19.292 "data_size": 63488 00:11:19.292 }, 00:11:19.292 { 00:11:19.292 "name": "BaseBdev3", 00:11:19.292 "uuid": "d3e9130e-350f-5a7b-a625-1c2e5d0032ae", 00:11:19.292 "is_configured": true, 00:11:19.292 "data_offset": 2048, 00:11:19.292 "data_size": 63488 00:11:19.292 }, 00:11:19.292 { 00:11:19.292 "name": "BaseBdev4", 00:11:19.292 "uuid": "ac1ba125-fb59-5dbc-985b-bf3995341a97", 00:11:19.292 "is_configured": true, 00:11:19.292 "data_offset": 2048, 00:11:19.292 "data_size": 63488 00:11:19.292 } 00:11:19.292 ] 00:11:19.292 }' 00:11:19.292 14:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:19.292 14:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:19.292 14:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:19.292 14:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:19.292 14:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:11:19.292 14:36:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.292 14:36:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.292 14:36:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.292 14:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:19.292 14:36:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.292 14:36:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.293 [2024-10-01 14:36:10.912382] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:19.293 [2024-10-01 14:36:10.912441] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.293 [2024-10-01 14:36:10.912459] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:11:19.293 [2024-10-01 14:36:10.912469] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.293 [2024-10-01 14:36:10.912841] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.293 [2024-10-01 14:36:10.912859] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:19.293 [2024-10-01 14:36:10.912922] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:11:19.293 [2024-10-01 14:36:10.912934] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:11:19.293 [2024-10-01 14:36:10.912940] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:19.293 [2024-10-01 14:36:10.912951] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:11:19.293 BaseBdev1 00:11:19.293 14:36:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.293 14:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:11:20.657 14:36:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:20.657 14:36:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:20.657 14:36:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:20.657 14:36:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:20.657 14:36:11 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:20.657 14:36:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:20.657 14:36:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.657 14:36:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.658 14:36:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.658 14:36:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.658 14:36:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:20.658 14:36:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.658 14:36:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.658 14:36:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.658 14:36:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.658 14:36:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.658 "name": "raid_bdev1", 00:11:20.658 "uuid": "e4debcb9-294d-48c8-a8ac-e095ca55e7ed", 00:11:20.658 "strip_size_kb": 0, 00:11:20.658 "state": "online", 00:11:20.658 "raid_level": "raid1", 00:11:20.658 "superblock": true, 00:11:20.658 "num_base_bdevs": 4, 00:11:20.658 "num_base_bdevs_discovered": 2, 00:11:20.658 "num_base_bdevs_operational": 2, 00:11:20.658 "base_bdevs_list": [ 00:11:20.658 { 00:11:20.658 "name": null, 00:11:20.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.658 "is_configured": false, 00:11:20.658 "data_offset": 0, 00:11:20.658 "data_size": 63488 00:11:20.658 }, 00:11:20.658 { 00:11:20.658 "name": null, 00:11:20.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.658 
"is_configured": false, 00:11:20.658 "data_offset": 2048, 00:11:20.658 "data_size": 63488 00:11:20.658 }, 00:11:20.658 { 00:11:20.658 "name": "BaseBdev3", 00:11:20.658 "uuid": "d3e9130e-350f-5a7b-a625-1c2e5d0032ae", 00:11:20.658 "is_configured": true, 00:11:20.658 "data_offset": 2048, 00:11:20.658 "data_size": 63488 00:11:20.658 }, 00:11:20.658 { 00:11:20.658 "name": "BaseBdev4", 00:11:20.658 "uuid": "ac1ba125-fb59-5dbc-985b-bf3995341a97", 00:11:20.658 "is_configured": true, 00:11:20.658 "data_offset": 2048, 00:11:20.658 "data_size": 63488 00:11:20.658 } 00:11:20.658 ] 00:11:20.658 }' 00:11:20.658 14:36:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.658 14:36:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.658 14:36:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:20.658 14:36:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:20.658 14:36:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:20.658 14:36:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:20.658 14:36:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:20.658 14:36:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.658 14:36:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.658 14:36:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.658 14:36:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:20.658 14:36:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.658 14:36:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:11:20.658 "name": "raid_bdev1", 00:11:20.658 "uuid": "e4debcb9-294d-48c8-a8ac-e095ca55e7ed", 00:11:20.658 "strip_size_kb": 0, 00:11:20.658 "state": "online", 00:11:20.658 "raid_level": "raid1", 00:11:20.658 "superblock": true, 00:11:20.658 "num_base_bdevs": 4, 00:11:20.658 "num_base_bdevs_discovered": 2, 00:11:20.658 "num_base_bdevs_operational": 2, 00:11:20.658 "base_bdevs_list": [ 00:11:20.658 { 00:11:20.658 "name": null, 00:11:20.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.658 "is_configured": false, 00:11:20.658 "data_offset": 0, 00:11:20.658 "data_size": 63488 00:11:20.658 }, 00:11:20.658 { 00:11:20.658 "name": null, 00:11:20.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.658 "is_configured": false, 00:11:20.658 "data_offset": 2048, 00:11:20.658 "data_size": 63488 00:11:20.658 }, 00:11:20.658 { 00:11:20.658 "name": "BaseBdev3", 00:11:20.658 "uuid": "d3e9130e-350f-5a7b-a625-1c2e5d0032ae", 00:11:20.658 "is_configured": true, 00:11:20.658 "data_offset": 2048, 00:11:20.658 "data_size": 63488 00:11:20.658 }, 00:11:20.658 { 00:11:20.658 "name": "BaseBdev4", 00:11:20.658 "uuid": "ac1ba125-fb59-5dbc-985b-bf3995341a97", 00:11:20.658 "is_configured": true, 00:11:20.658 "data_offset": 2048, 00:11:20.658 "data_size": 63488 00:11:20.658 } 00:11:20.658 ] 00:11:20.658 }' 00:11:20.658 14:36:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:20.658 14:36:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:20.658 14:36:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:20.914 14:36:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:20.914 14:36:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:20.914 14:36:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:11:20.914 14:36:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:20.914 14:36:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:20.914 14:36:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:20.914 14:36:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:20.914 14:36:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:20.914 14:36:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:20.914 14:36:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.914 14:36:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.914 [2024-10-01 14:36:12.372663] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:20.914 [2024-10-01 14:36:12.372831] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:11:20.914 [2024-10-01 14:36:12.372844] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:20.914 request: 00:11:20.914 { 00:11:20.914 "base_bdev": "BaseBdev1", 00:11:20.914 "raid_bdev": "raid_bdev1", 00:11:20.914 "method": "bdev_raid_add_base_bdev", 00:11:20.914 "req_id": 1 00:11:20.914 } 00:11:20.914 Got JSON-RPC error response 00:11:20.914 response: 00:11:20.914 { 00:11:20.914 "code": -22, 00:11:20.914 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:11:20.914 } 00:11:20.914 14:36:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:20.914 14:36:12 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:11:20.914 14:36:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:20.914 14:36:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:20.914 14:36:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:20.914 14:36:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:11:21.844 14:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:21.844 14:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:21.844 14:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:21.844 14:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:21.844 14:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:21.844 14:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:21.844 14:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.844 14:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.844 14:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.844 14:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.844 14:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.844 14:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.844 14:36:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.844 14:36:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:11:21.844 14:36:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.844 14:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.844 "name": "raid_bdev1", 00:11:21.844 "uuid": "e4debcb9-294d-48c8-a8ac-e095ca55e7ed", 00:11:21.844 "strip_size_kb": 0, 00:11:21.844 "state": "online", 00:11:21.844 "raid_level": "raid1", 00:11:21.844 "superblock": true, 00:11:21.844 "num_base_bdevs": 4, 00:11:21.844 "num_base_bdevs_discovered": 2, 00:11:21.844 "num_base_bdevs_operational": 2, 00:11:21.844 "base_bdevs_list": [ 00:11:21.844 { 00:11:21.844 "name": null, 00:11:21.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.844 "is_configured": false, 00:11:21.844 "data_offset": 0, 00:11:21.844 "data_size": 63488 00:11:21.844 }, 00:11:21.844 { 00:11:21.844 "name": null, 00:11:21.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.844 "is_configured": false, 00:11:21.844 "data_offset": 2048, 00:11:21.844 "data_size": 63488 00:11:21.844 }, 00:11:21.844 { 00:11:21.844 "name": "BaseBdev3", 00:11:21.844 "uuid": "d3e9130e-350f-5a7b-a625-1c2e5d0032ae", 00:11:21.844 "is_configured": true, 00:11:21.844 "data_offset": 2048, 00:11:21.844 "data_size": 63488 00:11:21.844 }, 00:11:21.844 { 00:11:21.844 "name": "BaseBdev4", 00:11:21.844 "uuid": "ac1ba125-fb59-5dbc-985b-bf3995341a97", 00:11:21.844 "is_configured": true, 00:11:21.844 "data_offset": 2048, 00:11:21.844 "data_size": 63488 00:11:21.844 } 00:11:21.844 ] 00:11:21.844 }' 00:11:21.844 14:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.844 14:36:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.101 14:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:22.101 14:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:22.101 14:36:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:22.101 14:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:22.101 14:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:22.101 14:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.101 14:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:22.101 14:36:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.101 14:36:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.101 14:36:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.101 14:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:22.101 "name": "raid_bdev1", 00:11:22.101 "uuid": "e4debcb9-294d-48c8-a8ac-e095ca55e7ed", 00:11:22.101 "strip_size_kb": 0, 00:11:22.101 "state": "online", 00:11:22.101 "raid_level": "raid1", 00:11:22.101 "superblock": true, 00:11:22.101 "num_base_bdevs": 4, 00:11:22.101 "num_base_bdevs_discovered": 2, 00:11:22.101 "num_base_bdevs_operational": 2, 00:11:22.101 "base_bdevs_list": [ 00:11:22.101 { 00:11:22.101 "name": null, 00:11:22.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.101 "is_configured": false, 00:11:22.101 "data_offset": 0, 00:11:22.101 "data_size": 63488 00:11:22.101 }, 00:11:22.101 { 00:11:22.101 "name": null, 00:11:22.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.101 "is_configured": false, 00:11:22.101 "data_offset": 2048, 00:11:22.101 "data_size": 63488 00:11:22.101 }, 00:11:22.101 { 00:11:22.101 "name": "BaseBdev3", 00:11:22.101 "uuid": "d3e9130e-350f-5a7b-a625-1c2e5d0032ae", 00:11:22.101 "is_configured": true, 00:11:22.101 "data_offset": 2048, 00:11:22.101 "data_size": 63488 00:11:22.101 }, 
00:11:22.101 { 00:11:22.101 "name": "BaseBdev4", 00:11:22.101 "uuid": "ac1ba125-fb59-5dbc-985b-bf3995341a97", 00:11:22.101 "is_configured": true, 00:11:22.101 "data_offset": 2048, 00:11:22.101 "data_size": 63488 00:11:22.101 } 00:11:22.101 ] 00:11:22.101 }' 00:11:22.101 14:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:22.101 14:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:22.102 14:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:22.102 14:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:22.102 14:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 76091 00:11:22.102 14:36:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 76091 ']' 00:11:22.102 14:36:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 76091 00:11:22.102 14:36:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:22.102 14:36:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:22.102 14:36:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76091 00:11:22.359 killing process with pid 76091 00:11:22.359 Received shutdown signal, test time was about 60.000000 seconds 00:11:22.359 00:11:22.359 Latency(us) 00:11:22.359 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:22.359 =================================================================================================================== 00:11:22.359 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:22.359 14:36:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:22.359 14:36:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # 
'[' reactor_0 = sudo ']' 00:11:22.359 14:36:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76091' 00:11:22.359 14:36:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 76091 00:11:22.359 [2024-10-01 14:36:13.801980] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:22.359 14:36:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 76091 00:11:22.359 [2024-10-01 14:36:13.802069] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:22.359 [2024-10-01 14:36:13.802122] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:22.359 [2024-10-01 14:36:13.802130] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:11:22.615 [2024-10-01 14:36:14.042916] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:23.179 ************************************ 00:11:23.179 END TEST raid_rebuild_test_sb 00:11:23.179 ************************************ 00:11:23.179 14:36:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:11:23.179 00:11:23.179 real 0m22.843s 00:11:23.179 user 0m26.509s 00:11:23.179 sys 0m3.249s 00:11:23.179 14:36:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:23.179 14:36:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.179 14:36:14 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:11:23.179 14:36:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:23.179 14:36:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:23.179 14:36:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:23.179 ************************************ 00:11:23.179 START TEST raid_rebuild_test_io 
00:11:23.179 ************************************ 00:11:23.179 14:36:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true 00:11:23.179 14:36:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:23.179 14:36:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:11:23.179 14:36:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:23.180 14:36:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:11:23.180 14:36:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:23.180 14:36:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:23.180 14:36:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:23.180 14:36:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:23.180 14:36:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:23.180 14:36:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:23.180 14:36:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:23.180 14:36:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:23.180 14:36:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:23.180 14:36:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:11:23.180 14:36:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:23.180 14:36:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:23.180 14:36:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:11:23.180 14:36:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( 
i++ )) 00:11:23.180 14:36:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:23.180 14:36:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:23.180 14:36:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:23.180 14:36:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:23.180 14:36:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:23.180 14:36:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:23.180 14:36:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:23.180 14:36:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:23.180 14:36:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:23.180 14:36:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:23.180 14:36:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:11:23.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:23.180 14:36:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76833 00:11:23.180 14:36:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76833 00:11:23.180 14:36:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 76833 ']' 00:11:23.180 14:36:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.180 14:36:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:23.180 14:36:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:23.180 14:36:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.180 14:36:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:23.180 14:36:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:23.180 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:23.180 Zero copy mechanism will not be used. 00:11:23.180 [2024-10-01 14:36:14.806445] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:11:23.180 [2024-10-01 14:36:14.806569] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76833 ] 00:11:23.437 [2024-10-01 14:36:14.954726] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.695 [2024-10-01 14:36:15.140881] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.695 [2024-10-01 14:36:15.278380] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.695 [2024-10-01 14:36:15.278429] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:24.261 14:36:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:24.261 14:36:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:11:24.261 14:36:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:24.261 14:36:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:24.261 14:36:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.261 14:36:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:24.261 BaseBdev1_malloc 00:11:24.261 14:36:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.261 14:36:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:24.261 14:36:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.261 14:36:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:24.261 [2024-10-01 14:36:15.692215] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:11:24.261 [2024-10-01 14:36:15.692273] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.261 [2024-10-01 14:36:15.692294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:24.261 [2024-10-01 14:36:15.692308] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.261 [2024-10-01 14:36:15.694441] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.261 [2024-10-01 14:36:15.694479] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:24.261 BaseBdev1 00:11:24.261 14:36:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.261 14:36:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:24.261 14:36:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:24.261 14:36:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.261 14:36:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:24.261 BaseBdev2_malloc 00:11:24.261 14:36:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.261 14:36:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:24.261 14:36:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.261 14:36:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:24.261 [2024-10-01 14:36:15.743422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:24.261 [2024-10-01 14:36:15.743486] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.261 [2024-10-01 14:36:15.743504] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:24.261 [2024-10-01 14:36:15.743517] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.261 [2024-10-01 14:36:15.745630] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.261 [2024-10-01 14:36:15.745796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:24.261 BaseBdev2 00:11:24.261 14:36:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.261 14:36:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:24.261 14:36:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:24.261 14:36:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.261 14:36:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:24.261 BaseBdev3_malloc 00:11:24.261 14:36:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.261 14:36:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:11:24.261 14:36:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.261 14:36:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:24.261 [2024-10-01 14:36:15.779195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:11:24.261 [2024-10-01 14:36:15.779243] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.261 [2024-10-01 14:36:15.779262] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:24.261 [2024-10-01 14:36:15.779273] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:11:24.261 [2024-10-01 14:36:15.781338] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.261 [2024-10-01 14:36:15.781377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:24.261 BaseBdev3 00:11:24.261 14:36:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.262 14:36:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:24.262 14:36:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:24.262 14:36:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.262 14:36:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:24.262 BaseBdev4_malloc 00:11:24.262 14:36:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.262 14:36:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:11:24.262 14:36:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.262 14:36:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:24.262 [2024-10-01 14:36:15.815329] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:11:24.262 [2024-10-01 14:36:15.815377] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.262 [2024-10-01 14:36:15.815392] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:24.262 [2024-10-01 14:36:15.815402] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.262 [2024-10-01 14:36:15.817481] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.262 [2024-10-01 14:36:15.817519] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:24.262 BaseBdev4 00:11:24.262 14:36:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.262 14:36:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:24.262 14:36:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.262 14:36:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:24.262 spare_malloc 00:11:24.262 14:36:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.262 14:36:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:24.262 14:36:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.262 14:36:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:24.262 spare_delay 00:11:24.262 14:36:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.262 14:36:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:24.262 14:36:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.262 14:36:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:24.262 [2024-10-01 14:36:15.859395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:24.262 [2024-10-01 14:36:15.859449] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.262 [2024-10-01 14:36:15.859468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:24.262 [2024-10-01 14:36:15.859478] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:11:24.262 [2024-10-01 14:36:15.861587] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.262 [2024-10-01 14:36:15.861639] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:24.262 spare 00:11:24.262 14:36:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.262 14:36:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:11:24.262 14:36:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.262 14:36:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:24.262 [2024-10-01 14:36:15.867449] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:24.262 [2024-10-01 14:36:15.869384] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:24.262 [2024-10-01 14:36:15.869525] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:24.262 [2024-10-01 14:36:15.869598] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:24.262 [2024-10-01 14:36:15.869765] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:24.262 [2024-10-01 14:36:15.869887] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:24.262 [2024-10-01 14:36:15.870190] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:24.262 [2024-10-01 14:36:15.870411] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:24.262 [2024-10-01 14:36:15.870480] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:24.262 [2024-10-01 14:36:15.870676] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:11:24.262 14:36:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.262 14:36:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:24.262 14:36:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:24.262 14:36:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:24.262 14:36:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:24.262 14:36:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:24.262 14:36:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.262 14:36:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.262 14:36:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.262 14:36:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.262 14:36:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.262 14:36:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:24.262 14:36:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.262 14:36:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.262 14:36:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:24.262 14:36:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.262 14:36:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.262 "name": "raid_bdev1", 00:11:24.262 "uuid": "c0b32f3c-f4fa-4688-88ce-34dff92d9cd8", 00:11:24.262 
"strip_size_kb": 0, 00:11:24.262 "state": "online", 00:11:24.262 "raid_level": "raid1", 00:11:24.262 "superblock": false, 00:11:24.262 "num_base_bdevs": 4, 00:11:24.262 "num_base_bdevs_discovered": 4, 00:11:24.262 "num_base_bdevs_operational": 4, 00:11:24.262 "base_bdevs_list": [ 00:11:24.262 { 00:11:24.262 "name": "BaseBdev1", 00:11:24.262 "uuid": "d6f83b91-39df-50c9-984c-7f69d76fc501", 00:11:24.262 "is_configured": true, 00:11:24.262 "data_offset": 0, 00:11:24.262 "data_size": 65536 00:11:24.262 }, 00:11:24.262 { 00:11:24.262 "name": "BaseBdev2", 00:11:24.262 "uuid": "73affa40-05e6-5a12-a32c-4fd48722a89b", 00:11:24.262 "is_configured": true, 00:11:24.262 "data_offset": 0, 00:11:24.262 "data_size": 65536 00:11:24.262 }, 00:11:24.262 { 00:11:24.262 "name": "BaseBdev3", 00:11:24.262 "uuid": "4eaf4be0-ebdc-5709-afcd-b6d4217c1933", 00:11:24.262 "is_configured": true, 00:11:24.262 "data_offset": 0, 00:11:24.262 "data_size": 65536 00:11:24.262 }, 00:11:24.262 { 00:11:24.262 "name": "BaseBdev4", 00:11:24.262 "uuid": "97f51943-40b4-5988-979d-189779714e17", 00:11:24.262 "is_configured": true, 00:11:24.262 "data_offset": 0, 00:11:24.262 "data_size": 65536 00:11:24.262 } 00:11:24.262 ] 00:11:24.262 }' 00:11:24.262 14:36:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.262 14:36:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:24.830 14:36:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:24.830 14:36:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.830 14:36:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:24.830 14:36:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:24.830 [2024-10-01 14:36:16.211870] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:24.830 14:36:16 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.830 14:36:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:24.830 14:36:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.830 14:36:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.830 14:36:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:24.830 14:36:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:24.830 14:36:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.830 14:36:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:24.830 14:36:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:11:24.830 14:36:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:24.830 14:36:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.830 14:36:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:24.830 14:36:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:24.830 [2024-10-01 14:36:16.283517] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:24.830 14:36:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.830 14:36:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:24.830 14:36:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:24.830 14:36:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:11:24.830 14:36:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:24.830 14:36:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:24.830 14:36:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:24.830 14:36:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.830 14:36:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.830 14:36:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.830 14:36:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.830 14:36:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.830 14:36:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:24.830 14:36:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.830 14:36:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:24.831 14:36:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.831 14:36:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.831 "name": "raid_bdev1", 00:11:24.831 "uuid": "c0b32f3c-f4fa-4688-88ce-34dff92d9cd8", 00:11:24.831 "strip_size_kb": 0, 00:11:24.831 "state": "online", 00:11:24.831 "raid_level": "raid1", 00:11:24.831 "superblock": false, 00:11:24.831 "num_base_bdevs": 4, 00:11:24.831 "num_base_bdevs_discovered": 3, 00:11:24.831 "num_base_bdevs_operational": 3, 00:11:24.831 "base_bdevs_list": [ 00:11:24.831 { 00:11:24.831 "name": null, 00:11:24.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.831 "is_configured": false, 00:11:24.831 "data_offset": 0, 00:11:24.831 "data_size": 65536 00:11:24.831 
}, 00:11:24.831 { 00:11:24.831 "name": "BaseBdev2", 00:11:24.831 "uuid": "73affa40-05e6-5a12-a32c-4fd48722a89b", 00:11:24.831 "is_configured": true, 00:11:24.831 "data_offset": 0, 00:11:24.831 "data_size": 65536 00:11:24.831 }, 00:11:24.831 { 00:11:24.831 "name": "BaseBdev3", 00:11:24.831 "uuid": "4eaf4be0-ebdc-5709-afcd-b6d4217c1933", 00:11:24.831 "is_configured": true, 00:11:24.831 "data_offset": 0, 00:11:24.831 "data_size": 65536 00:11:24.831 }, 00:11:24.831 { 00:11:24.831 "name": "BaseBdev4", 00:11:24.831 "uuid": "97f51943-40b4-5988-979d-189779714e17", 00:11:24.831 "is_configured": true, 00:11:24.831 "data_offset": 0, 00:11:24.831 "data_size": 65536 00:11:24.831 } 00:11:24.831 ] 00:11:24.831 }' 00:11:24.831 14:36:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.831 14:36:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:24.831 [2024-10-01 14:36:16.372887] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:24.831 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:24.831 Zero copy mechanism will not be used. 00:11:24.831 Running I/O for 60 seconds... 
00:11:25.088 14:36:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:25.088 14:36:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.088 14:36:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.088 [2024-10-01 14:36:16.609564] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:25.088 14:36:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.088 14:36:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:25.088 [2024-10-01 14:36:16.654277] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:11:25.088 [2024-10-01 14:36:16.656392] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:25.345 [2024-10-01 14:36:16.772156] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:25.345 [2024-10-01 14:36:16.773310] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:25.345 [2024-10-01 14:36:16.999503] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:25.345 [2024-10-01 14:36:17.000266] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:25.916 [2024-10-01 14:36:17.329774] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:25.916 [2024-10-01 14:36:17.331119] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:25.916 104.00 IOPS, 312.00 MiB/s [2024-10-01 14:36:17.575728] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: 
split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:25.916 [2024-10-01 14:36:17.576491] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:26.176 14:36:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:26.176 14:36:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:26.176 14:36:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:26.176 14:36:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:26.176 14:36:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:26.176 14:36:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.176 14:36:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.176 14:36:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:26.176 14:36:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:26.176 14:36:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.176 14:36:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:26.176 "name": "raid_bdev1", 00:11:26.176 "uuid": "c0b32f3c-f4fa-4688-88ce-34dff92d9cd8", 00:11:26.176 "strip_size_kb": 0, 00:11:26.176 "state": "online", 00:11:26.176 "raid_level": "raid1", 00:11:26.176 "superblock": false, 00:11:26.176 "num_base_bdevs": 4, 00:11:26.176 "num_base_bdevs_discovered": 4, 00:11:26.176 "num_base_bdevs_operational": 4, 00:11:26.176 "process": { 00:11:26.176 "type": "rebuild", 00:11:26.176 "target": "spare", 00:11:26.176 "progress": { 00:11:26.176 "blocks": 10240, 00:11:26.176 "percent": 15 00:11:26.176 } 
00:11:26.176 }, 00:11:26.176 "base_bdevs_list": [ 00:11:26.176 { 00:11:26.176 "name": "spare", 00:11:26.176 "uuid": "02beb028-a4ab-5dec-b786-660e02cab985", 00:11:26.176 "is_configured": true, 00:11:26.176 "data_offset": 0, 00:11:26.176 "data_size": 65536 00:11:26.176 }, 00:11:26.176 { 00:11:26.176 "name": "BaseBdev2", 00:11:26.176 "uuid": "73affa40-05e6-5a12-a32c-4fd48722a89b", 00:11:26.176 "is_configured": true, 00:11:26.176 "data_offset": 0, 00:11:26.176 "data_size": 65536 00:11:26.176 }, 00:11:26.176 { 00:11:26.176 "name": "BaseBdev3", 00:11:26.176 "uuid": "4eaf4be0-ebdc-5709-afcd-b6d4217c1933", 00:11:26.176 "is_configured": true, 00:11:26.176 "data_offset": 0, 00:11:26.176 "data_size": 65536 00:11:26.176 }, 00:11:26.176 { 00:11:26.176 "name": "BaseBdev4", 00:11:26.176 "uuid": "97f51943-40b4-5988-979d-189779714e17", 00:11:26.176 "is_configured": true, 00:11:26.176 "data_offset": 0, 00:11:26.176 "data_size": 65536 00:11:26.176 } 00:11:26.176 ] 00:11:26.176 }' 00:11:26.176 14:36:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:26.176 14:36:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:26.176 14:36:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:26.176 14:36:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:26.176 14:36:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:26.176 14:36:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.176 14:36:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:26.176 [2024-10-01 14:36:17.733310] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:26.438 [2024-10-01 14:36:17.891142] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev 
raid_bdev1: No such device 00:11:26.438 [2024-10-01 14:36:17.894432] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:26.438 [2024-10-01 14:36:17.894570] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:26.438 [2024-10-01 14:36:17.894594] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:26.438 [2024-10-01 14:36:17.925813] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:11:26.438 14:36:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.438 14:36:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:26.438 14:36:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:26.438 14:36:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:26.438 14:36:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.438 14:36:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.438 14:36:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:26.438 14:36:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.438 14:36:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.438 14:36:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.438 14:36:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.438 14:36:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.438 14:36:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:26.438 
14:36:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.438 14:36:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:26.438 14:36:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.438 14:36:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.438 "name": "raid_bdev1", 00:11:26.438 "uuid": "c0b32f3c-f4fa-4688-88ce-34dff92d9cd8", 00:11:26.438 "strip_size_kb": 0, 00:11:26.438 "state": "online", 00:11:26.438 "raid_level": "raid1", 00:11:26.438 "superblock": false, 00:11:26.438 "num_base_bdevs": 4, 00:11:26.438 "num_base_bdevs_discovered": 3, 00:11:26.438 "num_base_bdevs_operational": 3, 00:11:26.438 "base_bdevs_list": [ 00:11:26.438 { 00:11:26.438 "name": null, 00:11:26.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.438 "is_configured": false, 00:11:26.438 "data_offset": 0, 00:11:26.438 "data_size": 65536 00:11:26.438 }, 00:11:26.438 { 00:11:26.438 "name": "BaseBdev2", 00:11:26.438 "uuid": "73affa40-05e6-5a12-a32c-4fd48722a89b", 00:11:26.438 "is_configured": true, 00:11:26.438 "data_offset": 0, 00:11:26.438 "data_size": 65536 00:11:26.438 }, 00:11:26.438 { 00:11:26.438 "name": "BaseBdev3", 00:11:26.438 "uuid": "4eaf4be0-ebdc-5709-afcd-b6d4217c1933", 00:11:26.438 "is_configured": true, 00:11:26.438 "data_offset": 0, 00:11:26.438 "data_size": 65536 00:11:26.438 }, 00:11:26.438 { 00:11:26.438 "name": "BaseBdev4", 00:11:26.438 "uuid": "97f51943-40b4-5988-979d-189779714e17", 00:11:26.438 "is_configured": true, 00:11:26.438 "data_offset": 0, 00:11:26.438 "data_size": 65536 00:11:26.438 } 00:11:26.438 ] 00:11:26.438 }' 00:11:26.438 14:36:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.438 14:36:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:26.699 14:36:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:11:26.699 14:36:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:26.699 14:36:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:26.699 14:36:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:26.699 14:36:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:26.699 14:36:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:26.699 14:36:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.699 14:36:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.699 14:36:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:26.699 14:36:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.699 14:36:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:26.699 "name": "raid_bdev1", 00:11:26.699 "uuid": "c0b32f3c-f4fa-4688-88ce-34dff92d9cd8", 00:11:26.699 "strip_size_kb": 0, 00:11:26.699 "state": "online", 00:11:26.699 "raid_level": "raid1", 00:11:26.699 "superblock": false, 00:11:26.699 "num_base_bdevs": 4, 00:11:26.699 "num_base_bdevs_discovered": 3, 00:11:26.699 "num_base_bdevs_operational": 3, 00:11:26.699 "base_bdevs_list": [ 00:11:26.699 { 00:11:26.699 "name": null, 00:11:26.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.699 "is_configured": false, 00:11:26.699 "data_offset": 0, 00:11:26.699 "data_size": 65536 00:11:26.699 }, 00:11:26.699 { 00:11:26.699 "name": "BaseBdev2", 00:11:26.699 "uuid": "73affa40-05e6-5a12-a32c-4fd48722a89b", 00:11:26.699 "is_configured": true, 00:11:26.699 "data_offset": 0, 00:11:26.699 "data_size": 65536 00:11:26.699 }, 00:11:26.699 { 00:11:26.699 "name": 
"BaseBdev3", 00:11:26.699 "uuid": "4eaf4be0-ebdc-5709-afcd-b6d4217c1933", 00:11:26.699 "is_configured": true, 00:11:26.699 "data_offset": 0, 00:11:26.699 "data_size": 65536 00:11:26.699 }, 00:11:26.699 { 00:11:26.699 "name": "BaseBdev4", 00:11:26.699 "uuid": "97f51943-40b4-5988-979d-189779714e17", 00:11:26.699 "is_configured": true, 00:11:26.699 "data_offset": 0, 00:11:26.699 "data_size": 65536 00:11:26.699 } 00:11:26.699 ] 00:11:26.699 }' 00:11:26.699 14:36:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:26.699 14:36:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:26.699 14:36:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:26.699 14:36:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:26.699 14:36:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:26.699 14:36:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.699 14:36:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:26.699 [2024-10-01 14:36:18.371653] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:26.960 135.00 IOPS, 405.00 MiB/s 14:36:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.960 14:36:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:26.960 [2024-10-01 14:36:18.439581] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:26.960 [2024-10-01 14:36:18.441611] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:26.960 [2024-10-01 14:36:18.567692] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 
00:11:26.960 [2024-10-01 14:36:18.569009] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:27.220 [2024-10-01 14:36:18.813875] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:27.786 [2024-10-01 14:36:19.216350] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:27.786 [2024-10-01 14:36:19.217046] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:27.786 115.67 IOPS, 347.00 MiB/s 14:36:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:27.786 14:36:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:27.786 14:36:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:27.786 14:36:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:27.786 14:36:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:27.786 14:36:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.786 14:36:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.786 14:36:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.786 14:36:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:27.786 14:36:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.786 14:36:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:27.786 "name": "raid_bdev1", 00:11:27.786 "uuid": "c0b32f3c-f4fa-4688-88ce-34dff92d9cd8", 00:11:27.786 
"strip_size_kb": 0, 00:11:27.786 "state": "online", 00:11:27.786 "raid_level": "raid1", 00:11:27.786 "superblock": false, 00:11:27.786 "num_base_bdevs": 4, 00:11:27.786 "num_base_bdevs_discovered": 4, 00:11:27.786 "num_base_bdevs_operational": 4, 00:11:27.786 "process": { 00:11:27.786 "type": "rebuild", 00:11:27.786 "target": "spare", 00:11:27.786 "progress": { 00:11:27.786 "blocks": 12288, 00:11:27.786 "percent": 18 00:11:27.786 } 00:11:27.786 }, 00:11:27.786 "base_bdevs_list": [ 00:11:27.786 { 00:11:27.786 "name": "spare", 00:11:27.786 "uuid": "02beb028-a4ab-5dec-b786-660e02cab985", 00:11:27.786 "is_configured": true, 00:11:27.786 "data_offset": 0, 00:11:27.786 "data_size": 65536 00:11:27.786 }, 00:11:27.786 { 00:11:27.786 "name": "BaseBdev2", 00:11:27.786 "uuid": "73affa40-05e6-5a12-a32c-4fd48722a89b", 00:11:27.786 "is_configured": true, 00:11:27.786 "data_offset": 0, 00:11:27.786 "data_size": 65536 00:11:27.786 }, 00:11:27.786 { 00:11:27.786 "name": "BaseBdev3", 00:11:27.786 "uuid": "4eaf4be0-ebdc-5709-afcd-b6d4217c1933", 00:11:27.786 "is_configured": true, 00:11:27.786 "data_offset": 0, 00:11:27.786 "data_size": 65536 00:11:27.786 }, 00:11:27.786 { 00:11:27.786 "name": "BaseBdev4", 00:11:27.786 "uuid": "97f51943-40b4-5988-979d-189779714e17", 00:11:27.786 "is_configured": true, 00:11:27.786 "data_offset": 0, 00:11:27.786 "data_size": 65536 00:11:27.786 } 00:11:27.786 ] 00:11:27.786 }' 00:11:27.786 14:36:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:28.044 14:36:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:28.044 14:36:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:28.044 14:36:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:28.044 14:36:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:28.044 14:36:19 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:11:28.044 14:36:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:28.044 14:36:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:11:28.044 14:36:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:28.044 14:36:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.044 14:36:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:28.044 [2024-10-01 14:36:19.533935] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:28.044 [2024-10-01 14:36:19.535850] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:28.044 [2024-10-01 14:36:19.536243] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:28.044 [2024-10-01 14:36:19.542125] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:11:28.044 [2024-10-01 14:36:19.542228] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:11:28.044 14:36:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.044 14:36:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:11:28.044 14:36:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:11:28.044 14:36:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:28.044 14:36:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:28.044 14:36:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:11:28.044 14:36:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:28.044 14:36:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:28.044 14:36:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.044 14:36:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.044 14:36:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.044 14:36:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:28.044 14:36:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.044 14:36:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:28.044 "name": "raid_bdev1", 00:11:28.044 "uuid": "c0b32f3c-f4fa-4688-88ce-34dff92d9cd8", 00:11:28.044 "strip_size_kb": 0, 00:11:28.044 "state": "online", 00:11:28.044 "raid_level": "raid1", 00:11:28.044 "superblock": false, 00:11:28.044 "num_base_bdevs": 4, 00:11:28.044 "num_base_bdevs_discovered": 3, 00:11:28.044 "num_base_bdevs_operational": 3, 00:11:28.044 "process": { 00:11:28.044 "type": "rebuild", 00:11:28.044 "target": "spare", 00:11:28.044 "progress": { 00:11:28.044 "blocks": 14336, 00:11:28.044 "percent": 21 00:11:28.044 } 00:11:28.044 }, 00:11:28.044 "base_bdevs_list": [ 00:11:28.044 { 00:11:28.044 "name": "spare", 00:11:28.044 "uuid": "02beb028-a4ab-5dec-b786-660e02cab985", 00:11:28.044 "is_configured": true, 00:11:28.044 "data_offset": 0, 00:11:28.044 "data_size": 65536 00:11:28.044 }, 00:11:28.044 { 00:11:28.044 "name": null, 00:11:28.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.044 "is_configured": false, 00:11:28.044 "data_offset": 0, 00:11:28.044 "data_size": 65536 00:11:28.044 }, 00:11:28.044 { 00:11:28.044 "name": "BaseBdev3", 00:11:28.044 "uuid": 
"4eaf4be0-ebdc-5709-afcd-b6d4217c1933", 00:11:28.044 "is_configured": true, 00:11:28.044 "data_offset": 0, 00:11:28.044 "data_size": 65536 00:11:28.044 }, 00:11:28.044 { 00:11:28.044 "name": "BaseBdev4", 00:11:28.044 "uuid": "97f51943-40b4-5988-979d-189779714e17", 00:11:28.044 "is_configured": true, 00:11:28.044 "data_offset": 0, 00:11:28.044 "data_size": 65536 00:11:28.044 } 00:11:28.044 ] 00:11:28.044 }' 00:11:28.044 14:36:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:28.044 14:36:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:28.044 14:36:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:28.044 14:36:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:28.044 14:36:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=399 00:11:28.044 14:36:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:28.044 14:36:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:28.044 14:36:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:28.044 14:36:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:28.044 14:36:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:28.044 14:36:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:28.044 14:36:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.044 14:36:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.044 14:36:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:28.044 14:36:19 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.044 14:36:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.044 [2024-10-01 14:36:19.662377] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:28.044 14:36:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:28.044 "name": "raid_bdev1", 00:11:28.044 "uuid": "c0b32f3c-f4fa-4688-88ce-34dff92d9cd8", 00:11:28.044 "strip_size_kb": 0, 00:11:28.044 "state": "online", 00:11:28.044 "raid_level": "raid1", 00:11:28.044 "superblock": false, 00:11:28.044 "num_base_bdevs": 4, 00:11:28.044 "num_base_bdevs_discovered": 3, 00:11:28.044 "num_base_bdevs_operational": 3, 00:11:28.044 "process": { 00:11:28.044 "type": "rebuild", 00:11:28.044 "target": "spare", 00:11:28.044 "progress": { 00:11:28.044 "blocks": 14336, 00:11:28.044 "percent": 21 00:11:28.044 } 00:11:28.044 }, 00:11:28.044 "base_bdevs_list": [ 00:11:28.044 { 00:11:28.044 "name": "spare", 00:11:28.044 "uuid": "02beb028-a4ab-5dec-b786-660e02cab985", 00:11:28.044 "is_configured": true, 00:11:28.044 "data_offset": 0, 00:11:28.044 "data_size": 65536 00:11:28.044 }, 00:11:28.044 { 00:11:28.044 "name": null, 00:11:28.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.044 "is_configured": false, 00:11:28.044 "data_offset": 0, 00:11:28.044 "data_size": 65536 00:11:28.044 }, 00:11:28.044 { 00:11:28.044 "name": "BaseBdev3", 00:11:28.044 "uuid": "4eaf4be0-ebdc-5709-afcd-b6d4217c1933", 00:11:28.044 "is_configured": true, 00:11:28.044 "data_offset": 0, 00:11:28.044 "data_size": 65536 00:11:28.044 }, 00:11:28.044 { 00:11:28.044 "name": "BaseBdev4", 00:11:28.044 "uuid": "97f51943-40b4-5988-979d-189779714e17", 00:11:28.044 "is_configured": true, 00:11:28.044 "data_offset": 0, 00:11:28.044 "data_size": 65536 00:11:28.044 } 00:11:28.044 ] 00:11:28.044 }' 
00:11:28.044 14:36:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:28.044 14:36:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:28.044 14:36:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:28.301 14:36:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:28.301 14:36:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:28.559 [2024-10-01 14:36:19.995541] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:11:28.559 [2024-10-01 14:36:20.001156] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:11:28.559 [2024-10-01 14:36:20.223084] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:11:28.817 101.50 IOPS, 304.50 MiB/s [2024-10-01 14:36:20.427653] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:11:29.075 [2024-10-01 14:36:20.551596] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:11:29.075 14:36:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:29.075 14:36:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:29.075 14:36:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:29.075 14:36:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:29.075 14:36:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:29.075 14:36:20 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:29.075 14:36:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.075 14:36:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.075 14:36:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:29.075 14:36:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:29.332 14:36:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.332 14:36:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:29.332 "name": "raid_bdev1", 00:11:29.332 "uuid": "c0b32f3c-f4fa-4688-88ce-34dff92d9cd8", 00:11:29.332 "strip_size_kb": 0, 00:11:29.332 "state": "online", 00:11:29.332 "raid_level": "raid1", 00:11:29.332 "superblock": false, 00:11:29.332 "num_base_bdevs": 4, 00:11:29.332 "num_base_bdevs_discovered": 3, 00:11:29.332 "num_base_bdevs_operational": 3, 00:11:29.332 "process": { 00:11:29.332 "type": "rebuild", 00:11:29.332 "target": "spare", 00:11:29.332 "progress": { 00:11:29.332 "blocks": 28672, 00:11:29.332 "percent": 43 00:11:29.332 } 00:11:29.332 }, 00:11:29.332 "base_bdevs_list": [ 00:11:29.332 { 00:11:29.332 "name": "spare", 00:11:29.332 "uuid": "02beb028-a4ab-5dec-b786-660e02cab985", 00:11:29.332 "is_configured": true, 00:11:29.332 "data_offset": 0, 00:11:29.332 "data_size": 65536 00:11:29.332 }, 00:11:29.332 { 00:11:29.332 "name": null, 00:11:29.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.332 "is_configured": false, 00:11:29.332 "data_offset": 0, 00:11:29.332 "data_size": 65536 00:11:29.332 }, 00:11:29.332 { 00:11:29.332 "name": "BaseBdev3", 00:11:29.332 "uuid": "4eaf4be0-ebdc-5709-afcd-b6d4217c1933", 00:11:29.332 "is_configured": true, 00:11:29.332 "data_offset": 0, 00:11:29.332 "data_size": 65536 00:11:29.332 }, 
00:11:29.332 { 00:11:29.332 "name": "BaseBdev4", 00:11:29.332 "uuid": "97f51943-40b4-5988-979d-189779714e17", 00:11:29.332 "is_configured": true, 00:11:29.332 "data_offset": 0, 00:11:29.332 "data_size": 65536 00:11:29.332 } 00:11:29.332 ] 00:11:29.332 }' 00:11:29.332 14:36:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:29.332 14:36:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:29.332 14:36:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:29.332 14:36:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:29.332 14:36:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:29.333 [2024-10-01 14:36:20.997761] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:11:29.896 [2024-10-01 14:36:21.318154] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:11:30.153 88.60 IOPS, 265.80 MiB/s [2024-10-01 14:36:21.640946] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:11:30.447 14:36:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:30.447 14:36:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:30.447 14:36:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:30.447 14:36:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:30.447 14:36:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:30.447 14:36:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:11:30.447 14:36:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:30.447 14:36:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.447 14:36:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.447 14:36:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:30.447 14:36:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.447 14:36:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:30.447 "name": "raid_bdev1", 00:11:30.447 "uuid": "c0b32f3c-f4fa-4688-88ce-34dff92d9cd8", 00:11:30.447 "strip_size_kb": 0, 00:11:30.447 "state": "online", 00:11:30.447 "raid_level": "raid1", 00:11:30.447 "superblock": false, 00:11:30.447 "num_base_bdevs": 4, 00:11:30.447 "num_base_bdevs_discovered": 3, 00:11:30.447 "num_base_bdevs_operational": 3, 00:11:30.447 "process": { 00:11:30.447 "type": "rebuild", 00:11:30.447 "target": "spare", 00:11:30.447 "progress": { 00:11:30.447 "blocks": 49152, 00:11:30.447 "percent": 75 00:11:30.447 } 00:11:30.447 }, 00:11:30.447 "base_bdevs_list": [ 00:11:30.447 { 00:11:30.447 "name": "spare", 00:11:30.447 "uuid": "02beb028-a4ab-5dec-b786-660e02cab985", 00:11:30.447 "is_configured": true, 00:11:30.447 "data_offset": 0, 00:11:30.447 "data_size": 65536 00:11:30.447 }, 00:11:30.447 { 00:11:30.447 "name": null, 00:11:30.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.447 "is_configured": false, 00:11:30.447 "data_offset": 0, 00:11:30.447 "data_size": 65536 00:11:30.447 }, 00:11:30.447 { 00:11:30.447 "name": "BaseBdev3", 00:11:30.447 "uuid": "4eaf4be0-ebdc-5709-afcd-b6d4217c1933", 00:11:30.447 "is_configured": true, 00:11:30.447 "data_offset": 0, 00:11:30.447 "data_size": 65536 00:11:30.447 }, 00:11:30.447 { 00:11:30.447 "name": "BaseBdev4", 00:11:30.447 "uuid": 
"97f51943-40b4-5988-979d-189779714e17", 00:11:30.447 "is_configured": true, 00:11:30.447 "data_offset": 0, 00:11:30.447 "data_size": 65536 00:11:30.447 } 00:11:30.447 ] 00:11:30.447 }' 00:11:30.447 14:36:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:30.447 14:36:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:30.447 14:36:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:30.447 14:36:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:30.447 14:36:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:30.447 [2024-10-01 14:36:22.076327] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:11:30.704 [2024-10-01 14:36:22.290602] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:11:31.268 81.17 IOPS, 243.50 MiB/s [2024-10-01 14:36:22.725200] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:31.268 [2024-10-01 14:36:22.830234] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:31.268 [2024-10-01 14:36:22.832292] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:31.525 14:36:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:31.525 14:36:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:31.525 14:36:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:31.525 14:36:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:31.525 14:36:22 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:11:31.525 14:36:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:31.525 14:36:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.525 14:36:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.525 14:36:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:31.525 14:36:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:31.525 14:36:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.525 14:36:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:31.525 "name": "raid_bdev1", 00:11:31.525 "uuid": "c0b32f3c-f4fa-4688-88ce-34dff92d9cd8", 00:11:31.525 "strip_size_kb": 0, 00:11:31.525 "state": "online", 00:11:31.525 "raid_level": "raid1", 00:11:31.525 "superblock": false, 00:11:31.525 "num_base_bdevs": 4, 00:11:31.525 "num_base_bdevs_discovered": 3, 00:11:31.525 "num_base_bdevs_operational": 3, 00:11:31.525 "base_bdevs_list": [ 00:11:31.525 { 00:11:31.525 "name": "spare", 00:11:31.525 "uuid": "02beb028-a4ab-5dec-b786-660e02cab985", 00:11:31.525 "is_configured": true, 00:11:31.525 "data_offset": 0, 00:11:31.525 "data_size": 65536 00:11:31.525 }, 00:11:31.525 { 00:11:31.525 "name": null, 00:11:31.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.525 "is_configured": false, 00:11:31.525 "data_offset": 0, 00:11:31.525 "data_size": 65536 00:11:31.525 }, 00:11:31.525 { 00:11:31.525 "name": "BaseBdev3", 00:11:31.525 "uuid": "4eaf4be0-ebdc-5709-afcd-b6d4217c1933", 00:11:31.525 "is_configured": true, 00:11:31.525 "data_offset": 0, 00:11:31.525 "data_size": 65536 00:11:31.525 }, 00:11:31.525 { 00:11:31.525 "name": "BaseBdev4", 00:11:31.525 "uuid": "97f51943-40b4-5988-979d-189779714e17", 00:11:31.525 "is_configured": 
true, 00:11:31.525 "data_offset": 0, 00:11:31.525 "data_size": 65536 00:11:31.525 } 00:11:31.525 ] 00:11:31.525 }' 00:11:31.525 14:36:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:31.525 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:31.525 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:31.525 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:31.525 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:11:31.525 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:31.525 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:31.525 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:31.525 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:31.525 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:31.525 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:31.525 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.525 14:36:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.525 14:36:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:31.525 14:36:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.525 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:31.525 "name": "raid_bdev1", 00:11:31.525 "uuid": "c0b32f3c-f4fa-4688-88ce-34dff92d9cd8", 00:11:31.525 "strip_size_kb": 0, 
00:11:31.525 "state": "online", 00:11:31.525 "raid_level": "raid1", 00:11:31.525 "superblock": false, 00:11:31.525 "num_base_bdevs": 4, 00:11:31.525 "num_base_bdevs_discovered": 3, 00:11:31.525 "num_base_bdevs_operational": 3, 00:11:31.525 "base_bdevs_list": [ 00:11:31.525 { 00:11:31.525 "name": "spare", 00:11:31.525 "uuid": "02beb028-a4ab-5dec-b786-660e02cab985", 00:11:31.525 "is_configured": true, 00:11:31.525 "data_offset": 0, 00:11:31.525 "data_size": 65536 00:11:31.525 }, 00:11:31.525 { 00:11:31.526 "name": null, 00:11:31.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.526 "is_configured": false, 00:11:31.526 "data_offset": 0, 00:11:31.526 "data_size": 65536 00:11:31.526 }, 00:11:31.526 { 00:11:31.526 "name": "BaseBdev3", 00:11:31.526 "uuid": "4eaf4be0-ebdc-5709-afcd-b6d4217c1933", 00:11:31.526 "is_configured": true, 00:11:31.526 "data_offset": 0, 00:11:31.526 "data_size": 65536 00:11:31.526 }, 00:11:31.526 { 00:11:31.526 "name": "BaseBdev4", 00:11:31.526 "uuid": "97f51943-40b4-5988-979d-189779714e17", 00:11:31.526 "is_configured": true, 00:11:31.526 "data_offset": 0, 00:11:31.526 "data_size": 65536 00:11:31.526 } 00:11:31.526 ] 00:11:31.526 }' 00:11:31.526 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:31.526 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:31.526 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:31.526 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:31.526 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:31.526 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:31.526 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:31.526 
14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.526 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:31.526 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:31.526 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.526 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.526 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.526 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.526 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:31.526 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.526 14:36:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.526 14:36:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:31.526 14:36:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.526 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.526 "name": "raid_bdev1", 00:11:31.526 "uuid": "c0b32f3c-f4fa-4688-88ce-34dff92d9cd8", 00:11:31.526 "strip_size_kb": 0, 00:11:31.526 "state": "online", 00:11:31.526 "raid_level": "raid1", 00:11:31.526 "superblock": false, 00:11:31.526 "num_base_bdevs": 4, 00:11:31.526 "num_base_bdevs_discovered": 3, 00:11:31.526 "num_base_bdevs_operational": 3, 00:11:31.526 "base_bdevs_list": [ 00:11:31.526 { 00:11:31.526 "name": "spare", 00:11:31.526 "uuid": "02beb028-a4ab-5dec-b786-660e02cab985", 00:11:31.526 "is_configured": true, 00:11:31.526 "data_offset": 0, 00:11:31.526 "data_size": 65536 00:11:31.526 }, 
00:11:31.526 { 00:11:31.526 "name": null, 00:11:31.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.526 "is_configured": false, 00:11:31.526 "data_offset": 0, 00:11:31.526 "data_size": 65536 00:11:31.526 }, 00:11:31.526 { 00:11:31.526 "name": "BaseBdev3", 00:11:31.526 "uuid": "4eaf4be0-ebdc-5709-afcd-b6d4217c1933", 00:11:31.526 "is_configured": true, 00:11:31.526 "data_offset": 0, 00:11:31.526 "data_size": 65536 00:11:31.526 }, 00:11:31.526 { 00:11:31.526 "name": "BaseBdev4", 00:11:31.526 "uuid": "97f51943-40b4-5988-979d-189779714e17", 00:11:31.526 "is_configured": true, 00:11:31.526 "data_offset": 0, 00:11:31.526 "data_size": 65536 00:11:31.526 } 00:11:31.526 ] 00:11:31.526 }' 00:11:31.526 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.526 14:36:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:31.783 74.00 IOPS, 222.00 MiB/s 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:31.783 14:36:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.783 14:36:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:31.783 [2024-10-01 14:36:23.441579] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:31.783 [2024-10-01 14:36:23.441604] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:32.041 00:11:32.041 Latency(us) 00:11:32.041 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:32.041 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:11:32.041 raid_bdev1 : 7.10 73.26 219.79 0.00 0.00 18545.10 256.79 119376.34 00:11:32.041 =================================================================================================================== 00:11:32.041 Total : 73.26 219.79 0.00 0.00 18545.10 256.79 
119376.34 00:11:32.041 { 00:11:32.041 "results": [ 00:11:32.041 { 00:11:32.041 "job": "raid_bdev1", 00:11:32.041 "core_mask": "0x1", 00:11:32.041 "workload": "randrw", 00:11:32.041 "percentage": 50, 00:11:32.041 "status": "finished", 00:11:32.041 "queue_depth": 2, 00:11:32.041 "io_size": 3145728, 00:11:32.041 "runtime": 7.097623, 00:11:32.041 "iops": 73.26396456954673, 00:11:32.041 "mibps": 219.7918937086402, 00:11:32.041 "io_failed": 0, 00:11:32.041 "io_timeout": 0, 00:11:32.041 "avg_latency_us": 18545.095952662723, 00:11:32.041 "min_latency_us": 256.7876923076923, 00:11:32.041 "max_latency_us": 119376.34461538462 00:11:32.041 } 00:11:32.041 ], 00:11:32.041 "core_count": 1 00:11:32.041 } 00:11:32.041 [2024-10-01 14:36:23.484835] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:32.041 [2024-10-01 14:36:23.484875] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:32.041 [2024-10-01 14:36:23.484962] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:32.041 [2024-10-01 14:36:23.484972] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:32.041 14:36:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.041 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.041 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:11:32.041 14:36:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.041 14:36:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:32.041 14:36:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.041 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:32.041 14:36:23 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:32.041 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:11:32.041 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:11:32.041 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:32.041 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:11:32.041 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:32.041 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:32.041 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:32.041 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:11:32.041 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:32.041 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:32.041 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:11:32.299 /dev/nbd0 00:11:32.299 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:32.299 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:32.299 14:36:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:32.299 14:36:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:11:32.299 14:36:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:32.299 14:36:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:32.299 14:36:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- 
# grep -q -w nbd0 /proc/partitions 00:11:32.299 14:36:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:11:32.299 14:36:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:32.299 14:36:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:32.299 14:36:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:32.299 1+0 records in 00:11:32.299 1+0 records out 00:11:32.299 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000295646 s, 13.9 MB/s 00:11:32.299 14:36:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:32.299 14:36:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:11:32.299 14:36:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:32.299 14:36:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:32.299 14:36:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:11:32.299 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:32.299 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:32.299 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:11:32.299 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:11:32.299 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:11:32.299 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:11:32.299 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:11:32.299 14:36:23 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:11:32.299 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:32.299 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:11:32.299 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:32.299 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:11:32.299 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:32.299 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:11:32.299 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:32.299 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:32.299 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:11:32.299 /dev/nbd1 00:11:32.557 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:32.557 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:32.557 14:36:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:32.557 14:36:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:11:32.557 14:36:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:32.557 14:36:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:32.557 14:36:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:32.557 14:36:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:11:32.557 14:36:23 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:32.557 14:36:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:32.557 14:36:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:32.557 1+0 records in 00:11:32.557 1+0 records out 00:11:32.557 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000205555 s, 19.9 MB/s 00:11:32.557 14:36:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:32.557 14:36:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:11:32.557 14:36:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:32.557 14:36:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:32.557 14:36:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:11:32.557 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:32.557 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:32.557 14:36:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:32.557 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:11:32.557 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:32.557 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:11:32.557 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:32.557 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:11:32.557 14:36:24 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:32.557 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:32.815 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:32.815 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:32.815 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:32.815 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:32.815 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:32.815 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:32.815 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:11:32.815 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:32.815 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:11:32.815 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:11:32.815 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:11:32.815 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:32.815 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:11:32.815 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:32.815 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:11:32.815 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:32.815 14:36:24 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@12 -- # local i 00:11:32.815 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:32.815 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:32.815 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:11:33.073 /dev/nbd1 00:11:33.073 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:33.073 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:33.073 14:36:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:33.073 14:36:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:11:33.073 14:36:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:33.073 14:36:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:33.073 14:36:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:33.073 14:36:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:11:33.073 14:36:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:33.073 14:36:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:33.073 14:36:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:33.073 1+0 records in 00:11:33.073 1+0 records out 00:11:33.073 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290125 s, 14.1 MB/s 00:11:33.073 14:36:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:33.073 14:36:24 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@886 -- # size=4096 00:11:33.073 14:36:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:33.073 14:36:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:33.073 14:36:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:11:33.073 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:33.073 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:33.073 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:33.073 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:11:33.073 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:33.073 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:11:33.073 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:33.073 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:11:33.073 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:33.073 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:33.331 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:33.331 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:33.331 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:33.331 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:33.331 14:36:24 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:33.331 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:33.331 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:11:33.331 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:33.331 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:33.331 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:33.331 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:33.331 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:33.331 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:11:33.331 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:33.331 14:36:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:33.588 14:36:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:33.588 14:36:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:33.588 14:36:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:33.588 14:36:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:33.588 14:36:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:33.588 14:36:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:33.588 14:36:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:11:33.588 14:36:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:33.588 14:36:25 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:33.589 14:36:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76833 00:11:33.589 14:36:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 76833 ']' 00:11:33.589 14:36:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 76833 00:11:33.589 14:36:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:11:33.589 14:36:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:33.589 14:36:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76833 00:11:33.589 killing process with pid 76833 00:11:33.589 Received shutdown signal, test time was about 8.770151 seconds 00:11:33.589 00:11:33.589 Latency(us) 00:11:33.589 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:33.589 =================================================================================================================== 00:11:33.589 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:33.589 14:36:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:33.589 14:36:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:33.589 14:36:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76833' 00:11:33.589 14:36:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 76833 00:11:33.589 [2024-10-01 14:36:25.145084] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:33.589 14:36:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 76833 00:11:33.846 [2024-10-01 14:36:25.356824] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:34.413 14:36:26 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@786 -- # return 0 00:11:34.413 ************************************ 00:11:34.413 END TEST raid_rebuild_test_io 00:11:34.413 ************************************ 00:11:34.413 00:11:34.413 real 0m11.313s 00:11:34.413 user 0m14.142s 00:11:34.413 sys 0m1.301s 00:11:34.413 14:36:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:34.413 14:36:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:34.673 14:36:26 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:11:34.673 14:36:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:34.673 14:36:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:34.673 14:36:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:34.673 ************************************ 00:11:34.673 START TEST raid_rebuild_test_sb_io 00:11:34.673 ************************************ 00:11:34.673 14:36:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true 00:11:34.673 14:36:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:34.673 14:36:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:11:34.673 14:36:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:34.673 14:36:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:11:34.673 14:36:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:34.673 14:36:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:34.673 14:36:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:34.673 14:36:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 
00:11:34.673 14:36:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:34.673 14:36:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:34.673 14:36:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:34.673 14:36:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:34.673 14:36:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:34.673 14:36:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:11:34.673 14:36:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:34.673 14:36:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:34.673 14:36:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:11:34.673 14:36:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:34.673 14:36:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:34.674 14:36:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:34.674 14:36:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:34.674 14:36:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:34.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:34.674 14:36:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:34.674 14:36:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:34.674 14:36:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:34.674 14:36:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:34.674 14:36:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:34.674 14:36:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:34.674 14:36:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:34.674 14:36:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:34.674 14:36:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77223 00:11:34.674 14:36:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77223 00:11:34.674 14:36:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:34.674 14:36:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 77223 ']' 00:11:34.674 14:36:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.674 14:36:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:34.674 14:36:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:34.674 14:36:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:34.674 14:36:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:34.674 [2024-10-01 14:36:26.183226] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:11:34.674 [2024-10-01 14:36:26.183545] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77223 ] 00:11:34.674 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:34.674 Zero copy mechanism will not be used. 00:11:34.674 [2024-10-01 14:36:26.336822] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.932 [2024-10-01 14:36:26.525893] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.193 [2024-10-01 14:36:26.662904] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:35.193 [2024-10-01 14:36:26.663044] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:35.452 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:35.452 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:11:35.452 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:35.452 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:35.452 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.452 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.452 BaseBdev1_malloc 00:11:35.453 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.453 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:35.453 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.453 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.453 [2024-10-01 14:36:27.073154] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:35.453 [2024-10-01 14:36:27.073314] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.453 [2024-10-01 14:36:27.073356] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:35.453 [2024-10-01 14:36:27.073476] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.453 [2024-10-01 14:36:27.075658] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.453 [2024-10-01 14:36:27.075816] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:35.453 BaseBdev1 00:11:35.453 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.453 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:35.453 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:35.453 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.453 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.453 BaseBdev2_malloc 00:11:35.453 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.453 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:11:35.453 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.453 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.453 [2024-10-01 14:36:27.127147] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:35.453 [2024-10-01 14:36:27.127317] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.453 [2024-10-01 14:36:27.127357] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:35.453 [2024-10-01 14:36:27.127677] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.453 [2024-10-01 14:36:27.130006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.453 [2024-10-01 14:36:27.130104] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:35.453 BaseBdev2 00:11:35.453 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.453 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:35.453 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:35.453 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.453 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.713 BaseBdev3_malloc 00:11:35.713 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.713 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:11:35.713 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.713 14:36:27 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.713 [2024-10-01 14:36:27.171311] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:11:35.713 [2024-10-01 14:36:27.171366] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.713 [2024-10-01 14:36:27.171388] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:35.713 [2024-10-01 14:36:27.171398] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.713 [2024-10-01 14:36:27.173505] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.713 BaseBdev3 00:11:35.713 [2024-10-01 14:36:27.173633] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:35.713 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.714 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:35.714 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:35.714 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.714 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.714 BaseBdev4_malloc 00:11:35.714 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.714 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:11:35.714 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.714 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.714 [2024-10-01 14:36:27.211621] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev4_malloc 00:11:35.714 [2024-10-01 14:36:27.211780] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.714 [2024-10-01 14:36:27.211820] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:35.714 [2024-10-01 14:36:27.212348] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.714 [2024-10-01 14:36:27.218488] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.714 [2024-10-01 14:36:27.218786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:35.714 BaseBdev4 00:11:35.714 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.714 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:35.714 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.714 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.714 spare_malloc 00:11:35.714 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.714 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:35.714 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.714 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.714 spare_delay 00:11:35.714 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.714 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:35.714 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 
-- # xtrace_disable 00:11:35.714 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.714 [2024-10-01 14:36:27.264555] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:35.714 [2024-10-01 14:36:27.264697] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.714 [2024-10-01 14:36:27.264747] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:35.714 [2024-10-01 14:36:27.264760] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.714 [2024-10-01 14:36:27.266930] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.714 [2024-10-01 14:36:27.266992] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:35.714 spare 00:11:35.714 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.714 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:11:35.714 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.714 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.714 [2024-10-01 14:36:27.272623] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:35.714 [2024-10-01 14:36:27.274485] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:35.714 [2024-10-01 14:36:27.274646] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:35.714 [2024-10-01 14:36:27.274715] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:35.714 [2024-10-01 14:36:27.274908] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007780 00:11:35.714 [2024-10-01 14:36:27.274921] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:35.714 [2024-10-01 14:36:27.275198] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:35.714 [2024-10-01 14:36:27.275356] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:35.714 [2024-10-01 14:36:27.275365] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:35.714 [2024-10-01 14:36:27.275514] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:35.714 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.714 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:35.714 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:35.714 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:35.714 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.714 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.714 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.714 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.714 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.714 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.714 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.714 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:35.714 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:35.714 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.714 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.714 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.714 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.714 "name": "raid_bdev1", 00:11:35.714 "uuid": "d343bea5-a2df-4e6c-9f17-da3ed58b5401", 00:11:35.714 "strip_size_kb": 0, 00:11:35.714 "state": "online", 00:11:35.714 "raid_level": "raid1", 00:11:35.714 "superblock": true, 00:11:35.714 "num_base_bdevs": 4, 00:11:35.714 "num_base_bdevs_discovered": 4, 00:11:35.714 "num_base_bdevs_operational": 4, 00:11:35.714 "base_bdevs_list": [ 00:11:35.714 { 00:11:35.714 "name": "BaseBdev1", 00:11:35.714 "uuid": "5ca2d27d-be9f-5350-b088-86661a137902", 00:11:35.714 "is_configured": true, 00:11:35.714 "data_offset": 2048, 00:11:35.714 "data_size": 63488 00:11:35.714 }, 00:11:35.714 { 00:11:35.714 "name": "BaseBdev2", 00:11:35.714 "uuid": "c49ab85e-c07d-5800-be18-bc29ac9f3b36", 00:11:35.714 "is_configured": true, 00:11:35.714 "data_offset": 2048, 00:11:35.714 "data_size": 63488 00:11:35.714 }, 00:11:35.714 { 00:11:35.714 "name": "BaseBdev3", 00:11:35.714 "uuid": "20fdb125-c76d-5f36-8796-5e28cc809b2c", 00:11:35.714 "is_configured": true, 00:11:35.714 "data_offset": 2048, 00:11:35.714 "data_size": 63488 00:11:35.714 }, 00:11:35.714 { 00:11:35.714 "name": "BaseBdev4", 00:11:35.714 "uuid": "6878f767-3d76-5eab-83f8-39d2a25679e9", 00:11:35.714 "is_configured": true, 00:11:35.714 "data_offset": 2048, 00:11:35.714 "data_size": 63488 00:11:35.714 } 00:11:35.714 ] 00:11:35.714 }' 00:11:35.714 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:11:35.714 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.971 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:35.971 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:35.971 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.971 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.971 [2024-10-01 14:36:27.653043] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:36.230 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.230 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:11:36.230 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.230 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.230 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:36.230 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:36.230 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.230 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:11:36.230 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:11:36.230 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:36.230 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:36.230 14:36:27 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.230 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:36.230 [2024-10-01 14:36:27.720683] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:36.230 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.230 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:36.230 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:36.230 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.230 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.230 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.230 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:36.230 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.230 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.230 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.230 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.230 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.230 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.230 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.230 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:11:36.230 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.230 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.230 "name": "raid_bdev1", 00:11:36.230 "uuid": "d343bea5-a2df-4e6c-9f17-da3ed58b5401", 00:11:36.230 "strip_size_kb": 0, 00:11:36.230 "state": "online", 00:11:36.230 "raid_level": "raid1", 00:11:36.230 "superblock": true, 00:11:36.230 "num_base_bdevs": 4, 00:11:36.230 "num_base_bdevs_discovered": 3, 00:11:36.230 "num_base_bdevs_operational": 3, 00:11:36.230 "base_bdevs_list": [ 00:11:36.230 { 00:11:36.230 "name": null, 00:11:36.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.230 "is_configured": false, 00:11:36.230 "data_offset": 0, 00:11:36.230 "data_size": 63488 00:11:36.230 }, 00:11:36.230 { 00:11:36.230 "name": "BaseBdev2", 00:11:36.230 "uuid": "c49ab85e-c07d-5800-be18-bc29ac9f3b36", 00:11:36.230 "is_configured": true, 00:11:36.230 "data_offset": 2048, 00:11:36.230 "data_size": 63488 00:11:36.230 }, 00:11:36.230 { 00:11:36.230 "name": "BaseBdev3", 00:11:36.230 "uuid": "20fdb125-c76d-5f36-8796-5e28cc809b2c", 00:11:36.230 "is_configured": true, 00:11:36.230 "data_offset": 2048, 00:11:36.231 "data_size": 63488 00:11:36.231 }, 00:11:36.231 { 00:11:36.231 "name": "BaseBdev4", 00:11:36.231 "uuid": "6878f767-3d76-5eab-83f8-39d2a25679e9", 00:11:36.231 "is_configured": true, 00:11:36.231 "data_offset": 2048, 00:11:36.231 "data_size": 63488 00:11:36.231 } 00:11:36.231 ] 00:11:36.231 }' 00:11:36.231 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.231 14:36:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:36.231 [2024-10-01 14:36:27.806082] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:36.231 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:36.231 Zero copy mechanism will not be used. 
00:11:36.231 Running I/O for 60 seconds... 00:11:36.491 14:36:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:36.491 14:36:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.491 14:36:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:36.491 [2024-10-01 14:36:28.046237] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:36.491 14:36:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.491 14:36:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:36.491 [2024-10-01 14:36:28.098747] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:11:36.491 [2024-10-01 14:36:28.100668] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:36.753 [2024-10-01 14:36:28.210558] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:36.753 [2024-10-01 14:36:28.211171] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:36.753 [2024-10-01 14:36:28.423181] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:36.753 [2024-10-01 14:36:28.423413] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:37.321 115.00 IOPS, 345.00 MiB/s [2024-10-01 14:36:28.904205] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:37.578 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:37.578 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:37.578 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:37.578 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:37.578 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:37.578 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.578 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.578 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:37.578 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.578 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.578 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:37.578 "name": "raid_bdev1", 00:11:37.578 "uuid": "d343bea5-a2df-4e6c-9f17-da3ed58b5401", 00:11:37.578 "strip_size_kb": 0, 00:11:37.578 "state": "online", 00:11:37.578 "raid_level": "raid1", 00:11:37.578 "superblock": true, 00:11:37.578 "num_base_bdevs": 4, 00:11:37.578 "num_base_bdevs_discovered": 4, 00:11:37.578 "num_base_bdevs_operational": 4, 00:11:37.578 "process": { 00:11:37.578 "type": "rebuild", 00:11:37.578 "target": "spare", 00:11:37.578 "progress": { 00:11:37.578 "blocks": 12288, 00:11:37.578 "percent": 19 00:11:37.578 } 00:11:37.578 }, 00:11:37.578 "base_bdevs_list": [ 00:11:37.578 { 00:11:37.578 "name": "spare", 00:11:37.578 "uuid": "833da7f0-6cd3-5b09-8937-c5f02b0a9a0b", 00:11:37.578 "is_configured": true, 00:11:37.578 "data_offset": 2048, 00:11:37.578 "data_size": 63488 00:11:37.578 }, 00:11:37.578 { 00:11:37.578 "name": "BaseBdev2", 00:11:37.578 "uuid": "c49ab85e-c07d-5800-be18-bc29ac9f3b36", 00:11:37.578 
"is_configured": true, 00:11:37.578 "data_offset": 2048, 00:11:37.578 "data_size": 63488 00:11:37.578 }, 00:11:37.578 { 00:11:37.578 "name": "BaseBdev3", 00:11:37.578 "uuid": "20fdb125-c76d-5f36-8796-5e28cc809b2c", 00:11:37.578 "is_configured": true, 00:11:37.578 "data_offset": 2048, 00:11:37.578 "data_size": 63488 00:11:37.578 }, 00:11:37.578 { 00:11:37.578 "name": "BaseBdev4", 00:11:37.578 "uuid": "6878f767-3d76-5eab-83f8-39d2a25679e9", 00:11:37.578 "is_configured": true, 00:11:37.578 "data_offset": 2048, 00:11:37.578 "data_size": 63488 00:11:37.578 } 00:11:37.578 ] 00:11:37.578 }' 00:11:37.578 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:37.578 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:37.578 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:37.578 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:37.578 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:37.578 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.578 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:37.578 [2024-10-01 14:36:29.218918] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:37.834 [2024-10-01 14:36:29.327651] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:37.834 [2024-10-01 14:36:29.434595] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:37.834 [2024-10-01 14:36:29.439078] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:37.834 [2024-10-01 14:36:29.439239] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:37.834 [2024-10-01 14:36:29.439278] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:37.834 [2024-10-01 14:36:29.456871] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:11:37.834 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.834 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:37.835 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:37.835 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:37.835 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.835 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.835 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:37.835 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.835 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.835 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.835 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.835 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.835 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.835 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:37.835 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.835 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.835 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.835 "name": "raid_bdev1", 00:11:37.835 "uuid": "d343bea5-a2df-4e6c-9f17-da3ed58b5401", 00:11:37.835 "strip_size_kb": 0, 00:11:37.835 "state": "online", 00:11:37.835 "raid_level": "raid1", 00:11:37.835 "superblock": true, 00:11:37.835 "num_base_bdevs": 4, 00:11:37.835 "num_base_bdevs_discovered": 3, 00:11:37.835 "num_base_bdevs_operational": 3, 00:11:37.835 "base_bdevs_list": [ 00:11:37.835 { 00:11:37.835 "name": null, 00:11:37.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.835 "is_configured": false, 00:11:37.835 "data_offset": 0, 00:11:37.835 "data_size": 63488 00:11:37.835 }, 00:11:37.835 { 00:11:37.835 "name": "BaseBdev2", 00:11:37.835 "uuid": "c49ab85e-c07d-5800-be18-bc29ac9f3b36", 00:11:37.835 "is_configured": true, 00:11:37.835 "data_offset": 2048, 00:11:37.835 "data_size": 63488 00:11:37.835 }, 00:11:37.835 { 00:11:37.835 "name": "BaseBdev3", 00:11:37.835 "uuid": "20fdb125-c76d-5f36-8796-5e28cc809b2c", 00:11:37.835 "is_configured": true, 00:11:37.835 "data_offset": 2048, 00:11:37.835 "data_size": 63488 00:11:37.835 }, 00:11:37.835 { 00:11:37.835 "name": "BaseBdev4", 00:11:37.835 "uuid": "6878f767-3d76-5eab-83f8-39d2a25679e9", 00:11:37.835 "is_configured": true, 00:11:37.835 "data_offset": 2048, 00:11:37.835 "data_size": 63488 00:11:37.835 } 00:11:37.835 ] 00:11:37.835 }' 00:11:37.835 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.835 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:38.399 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:38.399 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- 
# local raid_bdev_name=raid_bdev1 00:11:38.399 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:38.399 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:38.399 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:38.399 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.399 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.399 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.399 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:38.399 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.399 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:38.399 "name": "raid_bdev1", 00:11:38.399 "uuid": "d343bea5-a2df-4e6c-9f17-da3ed58b5401", 00:11:38.399 "strip_size_kb": 0, 00:11:38.399 "state": "online", 00:11:38.399 "raid_level": "raid1", 00:11:38.399 "superblock": true, 00:11:38.399 "num_base_bdevs": 4, 00:11:38.399 "num_base_bdevs_discovered": 3, 00:11:38.399 "num_base_bdevs_operational": 3, 00:11:38.399 "base_bdevs_list": [ 00:11:38.399 { 00:11:38.399 "name": null, 00:11:38.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.399 "is_configured": false, 00:11:38.399 "data_offset": 0, 00:11:38.399 "data_size": 63488 00:11:38.399 }, 00:11:38.399 { 00:11:38.399 "name": "BaseBdev2", 00:11:38.399 "uuid": "c49ab85e-c07d-5800-be18-bc29ac9f3b36", 00:11:38.399 "is_configured": true, 00:11:38.399 "data_offset": 2048, 00:11:38.399 "data_size": 63488 00:11:38.399 }, 00:11:38.399 { 00:11:38.399 "name": "BaseBdev3", 00:11:38.399 "uuid": "20fdb125-c76d-5f36-8796-5e28cc809b2c", 00:11:38.399 "is_configured": 
true, 00:11:38.399 "data_offset": 2048, 00:11:38.399 "data_size": 63488 00:11:38.399 }, 00:11:38.399 { 00:11:38.399 "name": "BaseBdev4", 00:11:38.399 "uuid": "6878f767-3d76-5eab-83f8-39d2a25679e9", 00:11:38.399 "is_configured": true, 00:11:38.399 "data_offset": 2048, 00:11:38.399 "data_size": 63488 00:11:38.399 } 00:11:38.399 ] 00:11:38.399 }' 00:11:38.399 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:38.399 135.50 IOPS, 406.50 MiB/s 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:38.399 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:38.399 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:38.399 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:38.399 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.399 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:38.399 [2024-10-01 14:36:29.902513] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:38.399 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.399 14:36:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:38.399 [2024-10-01 14:36:29.962940] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:38.399 [2024-10-01 14:36:29.964932] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:38.399 [2024-10-01 14:36:30.071985] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:38.399 [2024-10-01 14:36:30.073286] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:38.656 [2024-10-01 14:36:30.284238] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:38.656 [2024-10-01 14:36:30.284502] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:38.913 [2024-10-01 14:36:30.569306] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:39.170 [2024-10-01 14:36:30.717964] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:39.429 125.67 IOPS, 377.00 MiB/s 14:36:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:39.429 14:36:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:39.429 14:36:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:39.429 14:36:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:39.429 14:36:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:39.429 14:36:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.429 14:36:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.429 14:36:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.429 14:36:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:39.429 14:36:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.429 14:36:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:11:39.429 "name": "raid_bdev1", 00:11:39.429 "uuid": "d343bea5-a2df-4e6c-9f17-da3ed58b5401", 00:11:39.429 "strip_size_kb": 0, 00:11:39.429 "state": "online", 00:11:39.429 "raid_level": "raid1", 00:11:39.429 "superblock": true, 00:11:39.429 "num_base_bdevs": 4, 00:11:39.429 "num_base_bdevs_discovered": 4, 00:11:39.429 "num_base_bdevs_operational": 4, 00:11:39.429 "process": { 00:11:39.429 "type": "rebuild", 00:11:39.429 "target": "spare", 00:11:39.429 "progress": { 00:11:39.429 "blocks": 12288, 00:11:39.429 "percent": 19 00:11:39.429 } 00:11:39.429 }, 00:11:39.429 "base_bdevs_list": [ 00:11:39.429 { 00:11:39.429 "name": "spare", 00:11:39.429 "uuid": "833da7f0-6cd3-5b09-8937-c5f02b0a9a0b", 00:11:39.429 "is_configured": true, 00:11:39.429 "data_offset": 2048, 00:11:39.429 "data_size": 63488 00:11:39.429 }, 00:11:39.429 { 00:11:39.429 "name": "BaseBdev2", 00:11:39.429 "uuid": "c49ab85e-c07d-5800-be18-bc29ac9f3b36", 00:11:39.429 "is_configured": true, 00:11:39.429 "data_offset": 2048, 00:11:39.429 "data_size": 63488 00:11:39.429 }, 00:11:39.429 { 00:11:39.429 "name": "BaseBdev3", 00:11:39.429 "uuid": "20fdb125-c76d-5f36-8796-5e28cc809b2c", 00:11:39.429 "is_configured": true, 00:11:39.429 "data_offset": 2048, 00:11:39.429 "data_size": 63488 00:11:39.429 }, 00:11:39.429 { 00:11:39.429 "name": "BaseBdev4", 00:11:39.429 "uuid": "6878f767-3d76-5eab-83f8-39d2a25679e9", 00:11:39.429 "is_configured": true, 00:11:39.429 "data_offset": 2048, 00:11:39.429 "data_size": 63488 00:11:39.429 } 00:11:39.429 ] 00:11:39.429 }' 00:11:39.429 14:36:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:39.429 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:39.429 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:39.429 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:11:39.429 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:11:39.429 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:11:39.429 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:11:39.429 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:11:39.429 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:39.429 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:11:39.429 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:39.429 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.429 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:39.429 [2024-10-01 14:36:31.052265] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:39.687 [2024-10-01 14:36:31.192331] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:11:39.687 [2024-10-01 14:36:31.192512] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:11:39.687 [2024-10-01 14:36:31.209838] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:39.687 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.687 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:11:39.687 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:11:39.687 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:11:39.687 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:39.687 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:39.687 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:39.687 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:39.687 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.687 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.687 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.687 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:39.687 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.687 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:39.687 "name": "raid_bdev1", 00:11:39.687 "uuid": "d343bea5-a2df-4e6c-9f17-da3ed58b5401", 00:11:39.687 "strip_size_kb": 0, 00:11:39.687 "state": "online", 00:11:39.687 "raid_level": "raid1", 00:11:39.687 "superblock": true, 00:11:39.687 "num_base_bdevs": 4, 00:11:39.687 "num_base_bdevs_discovered": 3, 00:11:39.687 "num_base_bdevs_operational": 3, 00:11:39.687 "process": { 00:11:39.687 "type": "rebuild", 00:11:39.687 "target": "spare", 00:11:39.687 "progress": { 00:11:39.687 "blocks": 14336, 00:11:39.687 "percent": 22 00:11:39.687 } 00:11:39.687 }, 00:11:39.687 "base_bdevs_list": [ 00:11:39.687 { 00:11:39.687 "name": "spare", 00:11:39.687 "uuid": "833da7f0-6cd3-5b09-8937-c5f02b0a9a0b", 00:11:39.687 "is_configured": true, 00:11:39.687 "data_offset": 2048, 00:11:39.687 "data_size": 63488 00:11:39.687 }, 00:11:39.687 { 00:11:39.687 "name": null, 
00:11:39.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.687 "is_configured": false, 00:11:39.687 "data_offset": 0, 00:11:39.687 "data_size": 63488 00:11:39.687 }, 00:11:39.687 { 00:11:39.687 "name": "BaseBdev3", 00:11:39.687 "uuid": "20fdb125-c76d-5f36-8796-5e28cc809b2c", 00:11:39.687 "is_configured": true, 00:11:39.687 "data_offset": 2048, 00:11:39.687 "data_size": 63488 00:11:39.687 }, 00:11:39.687 { 00:11:39.687 "name": "BaseBdev4", 00:11:39.687 "uuid": "6878f767-3d76-5eab-83f8-39d2a25679e9", 00:11:39.687 "is_configured": true, 00:11:39.687 "data_offset": 2048, 00:11:39.687 "data_size": 63488 00:11:39.687 } 00:11:39.687 ] 00:11:39.687 }' 00:11:39.687 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:39.687 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:39.687 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:39.687 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:39.687 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=411 00:11:39.687 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:39.687 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:39.687 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:39.687 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:39.687 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:39.687 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:39.687 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.687 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.687 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:39.687 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.687 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.687 [2024-10-01 14:36:31.327382] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:39.687 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:39.687 "name": "raid_bdev1", 00:11:39.687 "uuid": "d343bea5-a2df-4e6c-9f17-da3ed58b5401", 00:11:39.687 "strip_size_kb": 0, 00:11:39.687 "state": "online", 00:11:39.687 "raid_level": "raid1", 00:11:39.687 "superblock": true, 00:11:39.687 "num_base_bdevs": 4, 00:11:39.687 "num_base_bdevs_discovered": 3, 00:11:39.687 "num_base_bdevs_operational": 3, 00:11:39.687 "process": { 00:11:39.687 "type": "rebuild", 00:11:39.687 "target": "spare", 00:11:39.687 "progress": { 00:11:39.687 "blocks": 14336, 00:11:39.687 "percent": 22 00:11:39.687 } 00:11:39.687 }, 00:11:39.687 "base_bdevs_list": [ 00:11:39.687 { 00:11:39.687 "name": "spare", 00:11:39.687 "uuid": "833da7f0-6cd3-5b09-8937-c5f02b0a9a0b", 00:11:39.687 "is_configured": true, 00:11:39.687 "data_offset": 2048, 00:11:39.687 "data_size": 63488 00:11:39.687 }, 00:11:39.687 { 00:11:39.687 "name": null, 00:11:39.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.687 "is_configured": false, 00:11:39.687 "data_offset": 0, 00:11:39.687 "data_size": 63488 00:11:39.687 }, 00:11:39.687 { 00:11:39.687 "name": "BaseBdev3", 00:11:39.687 "uuid": "20fdb125-c76d-5f36-8796-5e28cc809b2c", 00:11:39.687 "is_configured": true, 00:11:39.687 "data_offset": 2048, 
00:11:39.688 "data_size": 63488 00:11:39.688 }, 00:11:39.688 { 00:11:39.688 "name": "BaseBdev4", 00:11:39.688 "uuid": "6878f767-3d76-5eab-83f8-39d2a25679e9", 00:11:39.688 "is_configured": true, 00:11:39.688 "data_offset": 2048, 00:11:39.688 "data_size": 63488 00:11:39.688 } 00:11:39.688 ] 00:11:39.688 }' 00:11:39.688 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:39.688 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:39.688 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:39.948 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:39.948 14:36:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:40.208 [2024-10-01 14:36:31.817179] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:11:40.470 108.25 IOPS, 324.75 MiB/s [2024-10-01 14:36:32.057416] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:11:40.727 [2024-10-01 14:36:32.180003] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:11:40.727 [2024-10-01 14:36:32.180435] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:11:40.727 14:36:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:40.727 14:36:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:40.727 14:36:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:40.727 14:36:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:11:40.727 14:36:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:40.727 14:36:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:40.727 14:36:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.727 14:36:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.727 14:36:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:40.727 14:36:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.987 14:36:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.987 14:36:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:40.987 "name": "raid_bdev1", 00:11:40.987 "uuid": "d343bea5-a2df-4e6c-9f17-da3ed58b5401", 00:11:40.987 "strip_size_kb": 0, 00:11:40.987 "state": "online", 00:11:40.987 "raid_level": "raid1", 00:11:40.987 "superblock": true, 00:11:40.987 "num_base_bdevs": 4, 00:11:40.987 "num_base_bdevs_discovered": 3, 00:11:40.987 "num_base_bdevs_operational": 3, 00:11:40.987 "process": { 00:11:40.987 "type": "rebuild", 00:11:40.987 "target": "spare", 00:11:40.987 "progress": { 00:11:40.987 "blocks": 30720, 00:11:40.987 "percent": 48 00:11:40.987 } 00:11:40.987 }, 00:11:40.987 "base_bdevs_list": [ 00:11:40.987 { 00:11:40.987 "name": "spare", 00:11:40.987 "uuid": "833da7f0-6cd3-5b09-8937-c5f02b0a9a0b", 00:11:40.987 "is_configured": true, 00:11:40.987 "data_offset": 2048, 00:11:40.987 "data_size": 63488 00:11:40.987 }, 00:11:40.987 { 00:11:40.987 "name": null, 00:11:40.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.987 "is_configured": false, 00:11:40.987 "data_offset": 0, 00:11:40.987 "data_size": 63488 00:11:40.987 }, 00:11:40.987 { 00:11:40.987 "name": "BaseBdev3", 
00:11:40.987 "uuid": "20fdb125-c76d-5f36-8796-5e28cc809b2c", 00:11:40.987 "is_configured": true, 00:11:40.987 "data_offset": 2048, 00:11:40.987 "data_size": 63488 00:11:40.987 }, 00:11:40.987 { 00:11:40.987 "name": "BaseBdev4", 00:11:40.987 "uuid": "6878f767-3d76-5eab-83f8-39d2a25679e9", 00:11:40.987 "is_configured": true, 00:11:40.987 "data_offset": 2048, 00:11:40.987 "data_size": 63488 00:11:40.987 } 00:11:40.987 ] 00:11:40.987 }' 00:11:40.987 14:36:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:40.987 14:36:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:40.987 14:36:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:40.987 14:36:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:40.987 14:36:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:40.987 [2024-10-01 14:36:32.501400] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:11:41.246 [2024-10-01 14:36:32.726764] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:11:41.504 94.60 IOPS, 283.80 MiB/s [2024-10-01 14:36:32.951443] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:11:42.075 14:36:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:42.075 14:36:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:42.075 14:36:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:42.075 14:36:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:11:42.075 14:36:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:42.075 14:36:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:42.075 14:36:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.075 14:36:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.075 14:36:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.075 14:36:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:42.075 [2024-10-01 14:36:33.510052] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:11:42.075 14:36:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.075 14:36:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:42.075 "name": "raid_bdev1", 00:11:42.075 "uuid": "d343bea5-a2df-4e6c-9f17-da3ed58b5401", 00:11:42.075 "strip_size_kb": 0, 00:11:42.075 "state": "online", 00:11:42.075 "raid_level": "raid1", 00:11:42.075 "superblock": true, 00:11:42.075 "num_base_bdevs": 4, 00:11:42.075 "num_base_bdevs_discovered": 3, 00:11:42.075 "num_base_bdevs_operational": 3, 00:11:42.075 "process": { 00:11:42.076 "type": "rebuild", 00:11:42.076 "target": "spare", 00:11:42.076 "progress": { 00:11:42.076 "blocks": 45056, 00:11:42.076 "percent": 70 00:11:42.076 } 00:11:42.076 }, 00:11:42.076 "base_bdevs_list": [ 00:11:42.076 { 00:11:42.076 "name": "spare", 00:11:42.076 "uuid": "833da7f0-6cd3-5b09-8937-c5f02b0a9a0b", 00:11:42.076 "is_configured": true, 00:11:42.076 "data_offset": 2048, 00:11:42.076 "data_size": 63488 00:11:42.076 }, 00:11:42.076 { 00:11:42.076 "name": null, 00:11:42.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.076 "is_configured": 
false, 00:11:42.076 "data_offset": 0, 00:11:42.076 "data_size": 63488 00:11:42.076 }, 00:11:42.076 { 00:11:42.076 "name": "BaseBdev3", 00:11:42.076 "uuid": "20fdb125-c76d-5f36-8796-5e28cc809b2c", 00:11:42.076 "is_configured": true, 00:11:42.076 "data_offset": 2048, 00:11:42.076 "data_size": 63488 00:11:42.076 }, 00:11:42.076 { 00:11:42.076 "name": "BaseBdev4", 00:11:42.076 "uuid": "6878f767-3d76-5eab-83f8-39d2a25679e9", 00:11:42.076 "is_configured": true, 00:11:42.076 "data_offset": 2048, 00:11:42.076 "data_size": 63488 00:11:42.076 } 00:11:42.076 ] 00:11:42.076 }' 00:11:42.076 14:36:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:42.076 14:36:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:42.076 14:36:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:42.076 14:36:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:42.076 14:36:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:42.076 [2024-10-01 14:36:33.744781] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:11:42.905 86.00 IOPS, 258.00 MiB/s [2024-10-01 14:36:34.514521] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:43.164 14:36:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:43.164 14:36:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:43.164 14:36:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:43.164 14:36:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:43.164 14:36:34 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:11:43.164 14:36:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:43.164 14:36:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.164 14:36:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:43.164 14:36:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.164 14:36:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:43.164 [2024-10-01 14:36:34.620955] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:43.164 [2024-10-01 14:36:34.624328] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:43.164 14:36:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.164 14:36:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:43.164 "name": "raid_bdev1", 00:11:43.164 "uuid": "d343bea5-a2df-4e6c-9f17-da3ed58b5401", 00:11:43.164 "strip_size_kb": 0, 00:11:43.164 "state": "online", 00:11:43.164 "raid_level": "raid1", 00:11:43.164 "superblock": true, 00:11:43.164 "num_base_bdevs": 4, 00:11:43.164 "num_base_bdevs_discovered": 3, 00:11:43.164 "num_base_bdevs_operational": 3, 00:11:43.164 "process": { 00:11:43.164 "type": "rebuild", 00:11:43.164 "target": "spare", 00:11:43.164 "progress": { 00:11:43.164 "blocks": 63488, 00:11:43.164 "percent": 100 00:11:43.164 } 00:11:43.164 }, 00:11:43.164 "base_bdevs_list": [ 00:11:43.164 { 00:11:43.164 "name": "spare", 00:11:43.164 "uuid": "833da7f0-6cd3-5b09-8937-c5f02b0a9a0b", 00:11:43.164 "is_configured": true, 00:11:43.164 "data_offset": 2048, 00:11:43.164 "data_size": 63488 00:11:43.164 }, 00:11:43.164 { 00:11:43.164 "name": null, 00:11:43.164 "uuid": "00000000-0000-0000-0000-000000000000", 
00:11:43.164 "is_configured": false, 00:11:43.164 "data_offset": 0, 00:11:43.164 "data_size": 63488 00:11:43.164 }, 00:11:43.164 { 00:11:43.164 "name": "BaseBdev3", 00:11:43.164 "uuid": "20fdb125-c76d-5f36-8796-5e28cc809b2c", 00:11:43.164 "is_configured": true, 00:11:43.164 "data_offset": 2048, 00:11:43.164 "data_size": 63488 00:11:43.164 }, 00:11:43.164 { 00:11:43.164 "name": "BaseBdev4", 00:11:43.164 "uuid": "6878f767-3d76-5eab-83f8-39d2a25679e9", 00:11:43.164 "is_configured": true, 00:11:43.164 "data_offset": 2048, 00:11:43.164 "data_size": 63488 00:11:43.164 } 00:11:43.164 ] 00:11:43.164 }' 00:11:43.164 14:36:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:43.164 14:36:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:43.164 14:36:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:43.164 14:36:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:43.164 14:36:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:44.107 78.43 IOPS, 235.29 MiB/s 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:44.107 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:44.107 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:44.107 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:44.107 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:44.107 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:44.107 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:11:44.107 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:44.107 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.107 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:44.107 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.107 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:44.107 "name": "raid_bdev1", 00:11:44.107 "uuid": "d343bea5-a2df-4e6c-9f17-da3ed58b5401", 00:11:44.107 "strip_size_kb": 0, 00:11:44.107 "state": "online", 00:11:44.107 "raid_level": "raid1", 00:11:44.107 "superblock": true, 00:11:44.107 "num_base_bdevs": 4, 00:11:44.107 "num_base_bdevs_discovered": 3, 00:11:44.107 "num_base_bdevs_operational": 3, 00:11:44.107 "base_bdevs_list": [ 00:11:44.107 { 00:11:44.107 "name": "spare", 00:11:44.107 "uuid": "833da7f0-6cd3-5b09-8937-c5f02b0a9a0b", 00:11:44.107 "is_configured": true, 00:11:44.107 "data_offset": 2048, 00:11:44.107 "data_size": 63488 00:11:44.107 }, 00:11:44.107 { 00:11:44.107 "name": null, 00:11:44.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.107 "is_configured": false, 00:11:44.107 "data_offset": 0, 00:11:44.107 "data_size": 63488 00:11:44.107 }, 00:11:44.107 { 00:11:44.107 "name": "BaseBdev3", 00:11:44.107 "uuid": "20fdb125-c76d-5f36-8796-5e28cc809b2c", 00:11:44.107 "is_configured": true, 00:11:44.107 "data_offset": 2048, 00:11:44.107 "data_size": 63488 00:11:44.107 }, 00:11:44.107 { 00:11:44.107 "name": "BaseBdev4", 00:11:44.107 "uuid": "6878f767-3d76-5eab-83f8-39d2a25679e9", 00:11:44.107 "is_configured": true, 00:11:44.107 "data_offset": 2048, 00:11:44.107 "data_size": 63488 00:11:44.107 } 00:11:44.107 ] 00:11:44.107 }' 00:11:44.107 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:44.107 
14:36:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:44.107 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:44.369 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:44.369 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:11:44.369 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:44.369 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:44.369 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:44.369 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:44.369 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:44.369 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.369 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.369 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:44.369 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:44.369 73.38 IOPS, 220.12 MiB/s 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.369 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:44.369 "name": "raid_bdev1", 00:11:44.369 "uuid": "d343bea5-a2df-4e6c-9f17-da3ed58b5401", 00:11:44.369 "strip_size_kb": 0, 00:11:44.369 "state": "online", 00:11:44.369 "raid_level": "raid1", 00:11:44.369 "superblock": true, 00:11:44.369 "num_base_bdevs": 4, 00:11:44.369 "num_base_bdevs_discovered": 3, 
00:11:44.369 "num_base_bdevs_operational": 3, 00:11:44.369 "base_bdevs_list": [ 00:11:44.369 { 00:11:44.369 "name": "spare", 00:11:44.369 "uuid": "833da7f0-6cd3-5b09-8937-c5f02b0a9a0b", 00:11:44.369 "is_configured": true, 00:11:44.369 "data_offset": 2048, 00:11:44.369 "data_size": 63488 00:11:44.369 }, 00:11:44.369 { 00:11:44.369 "name": null, 00:11:44.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.369 "is_configured": false, 00:11:44.369 "data_offset": 0, 00:11:44.369 "data_size": 63488 00:11:44.369 }, 00:11:44.369 { 00:11:44.369 "name": "BaseBdev3", 00:11:44.369 "uuid": "20fdb125-c76d-5f36-8796-5e28cc809b2c", 00:11:44.369 "is_configured": true, 00:11:44.369 "data_offset": 2048, 00:11:44.369 "data_size": 63488 00:11:44.369 }, 00:11:44.369 { 00:11:44.369 "name": "BaseBdev4", 00:11:44.369 "uuid": "6878f767-3d76-5eab-83f8-39d2a25679e9", 00:11:44.369 "is_configured": true, 00:11:44.369 "data_offset": 2048, 00:11:44.369 "data_size": 63488 00:11:44.369 } 00:11:44.369 ] 00:11:44.369 }' 00:11:44.369 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:44.369 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:44.369 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:44.369 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:44.369 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:44.369 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:44.369 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:44.369 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:44.369 14:36:35 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:44.369 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:44.369 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.369 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.369 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.369 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.369 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.369 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.369 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:44.369 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:44.369 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.369 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.369 "name": "raid_bdev1", 00:11:44.369 "uuid": "d343bea5-a2df-4e6c-9f17-da3ed58b5401", 00:11:44.369 "strip_size_kb": 0, 00:11:44.369 "state": "online", 00:11:44.369 "raid_level": "raid1", 00:11:44.369 "superblock": true, 00:11:44.369 "num_base_bdevs": 4, 00:11:44.369 "num_base_bdevs_discovered": 3, 00:11:44.369 "num_base_bdevs_operational": 3, 00:11:44.369 "base_bdevs_list": [ 00:11:44.369 { 00:11:44.369 "name": "spare", 00:11:44.369 "uuid": "833da7f0-6cd3-5b09-8937-c5f02b0a9a0b", 00:11:44.369 "is_configured": true, 00:11:44.369 "data_offset": 2048, 00:11:44.369 "data_size": 63488 00:11:44.369 }, 00:11:44.369 { 00:11:44.369 "name": null, 00:11:44.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.369 
"is_configured": false, 00:11:44.369 "data_offset": 0, 00:11:44.369 "data_size": 63488 00:11:44.369 }, 00:11:44.369 { 00:11:44.369 "name": "BaseBdev3", 00:11:44.369 "uuid": "20fdb125-c76d-5f36-8796-5e28cc809b2c", 00:11:44.369 "is_configured": true, 00:11:44.369 "data_offset": 2048, 00:11:44.369 "data_size": 63488 00:11:44.369 }, 00:11:44.369 { 00:11:44.369 "name": "BaseBdev4", 00:11:44.369 "uuid": "6878f767-3d76-5eab-83f8-39d2a25679e9", 00:11:44.369 "is_configured": true, 00:11:44.369 "data_offset": 2048, 00:11:44.369 "data_size": 63488 00:11:44.369 } 00:11:44.369 ] 00:11:44.369 }' 00:11:44.369 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.369 14:36:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:44.631 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:44.631 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.631 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:44.631 [2024-10-01 14:36:36.231315] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:44.631 [2024-10-01 14:36:36.231343] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:44.631 00:11:44.631 Latency(us) 00:11:44.631 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:44.631 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:11:44.631 raid_bdev1 : 8.45 71.50 214.51 0.00 0.00 19330.72 329.26 111310.38 00:11:44.631 =================================================================================================================== 00:11:44.631 Total : 71.50 214.51 0.00 0.00 19330.72 329.26 111310.38 00:11:44.631 { 00:11:44.631 "results": [ 00:11:44.631 { 00:11:44.631 "job": "raid_bdev1", 00:11:44.631 "core_mask": 
"0x1", 00:11:44.631 "workload": "randrw", 00:11:44.631 "percentage": 50, 00:11:44.631 "status": "finished", 00:11:44.631 "queue_depth": 2, 00:11:44.631 "io_size": 3145728, 00:11:44.631 "runtime": 8.447007, 00:11:44.631 "iops": 71.50461696077676, 00:11:44.631 "mibps": 214.51385088233027, 00:11:44.631 "io_failed": 0, 00:11:44.631 "io_timeout": 0, 00:11:44.631 "avg_latency_us": 19330.724319918492, 00:11:44.631 "min_latency_us": 329.2553846153846, 00:11:44.631 "max_latency_us": 111310.37538461538 00:11:44.631 } 00:11:44.631 ], 00:11:44.631 "core_count": 1 00:11:44.631 } 00:11:44.631 [2024-10-01 14:36:36.270730] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:44.631 [2024-10-01 14:36:36.270774] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:44.631 [2024-10-01 14:36:36.270878] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:44.631 [2024-10-01 14:36:36.270888] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:44.631 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.631 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:11:44.631 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.631 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.631 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:44.631 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.631 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:44.631 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:44.631 14:36:36 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:11:44.631 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:11:44.631 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:44.631 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:11:44.631 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:44.631 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:44.631 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:44.631 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:11:44.631 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:44.631 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:44.631 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:11:44.890 /dev/nbd0 00:11:44.891 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:44.891 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:44.891 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:44.891 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:11:44.891 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:44.891 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:44.891 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w 
nbd0 /proc/partitions 00:11:44.891 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:11:44.891 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:44.891 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:44.891 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:44.891 1+0 records in 00:11:44.891 1+0 records out 00:11:44.891 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226017 s, 18.1 MB/s 00:11:44.891 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:44.891 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:11:44.891 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:44.891 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:44.891 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:11:44.891 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:44.891 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:44.891 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:11:44.891 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:11:44.891 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:11:44.891 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:11:44.891 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z 
BaseBdev3 ']' 00:11:44.891 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:11:44.891 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:44.891 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:11:44.891 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:44.891 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:11:44.891 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:44.891 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:11:44.891 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:44.891 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:44.891 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:11:45.151 /dev/nbd1 00:11:45.151 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:45.152 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:45.152 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:45.152 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:11:45.152 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:45.152 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:45.152 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:45.152 14:36:36 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:11:45.152 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:45.152 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:45.152 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:45.152 1+0 records in 00:11:45.152 1+0 records out 00:11:45.152 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290279 s, 14.1 MB/s 00:11:45.152 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:45.152 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:11:45.152 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:45.152 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:45.152 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:11:45.152 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:45.152 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:45.152 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:11:45.412 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:11:45.412 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:45.412 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:11:45.413 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # 
local nbd_list 00:11:45.413 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:11:45.413 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:45.413 14:36:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:45.674 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:45.674 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:45.674 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:45.674 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:45.675 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:45.675 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:45.675 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:11:45.675 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:45.675 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:11:45.675 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:11:45.675 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:11:45.675 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:45.675 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:11:45.675 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:45.675 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 
-- # nbd_list=('/dev/nbd1') 00:11:45.675 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:45.675 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:11:45.675 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:45.675 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:45.675 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:11:45.675 /dev/nbd1 00:11:45.935 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:45.935 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:45.935 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:45.935 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:11:45.935 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:45.935 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:45.935 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:45.935 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:11:45.935 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:45.935 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:45.935 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:45.935 1+0 records in 00:11:45.935 1+0 records out 00:11:45.935 4096 bytes (4.1 kB, 4.0 KiB) 
copied, 0.000295132 s, 13.9 MB/s 00:11:45.935 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:45.935 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:11:45.935 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:45.935 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:45.935 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:11:45.935 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:45.935 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:45.935 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:11:45.935 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:11:45.935 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:45.935 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:11:45.935 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:45.935 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:11:45.935 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:45.935 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:46.198 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:46.198 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 
-- # waitfornbd_exit nbd1 00:11:46.198 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:46.198 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:46.198 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:46.198 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:46.198 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:11:46.198 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:46.198 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:46.198 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:46.198 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:46.198 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:46.198 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:11:46.198 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:46.198 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:46.543 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:46.543 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:46.543 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:46.543 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:46.543 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:11:46.543 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:46.543 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:11:46.543 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:46.543 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:11:46.543 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:11:46.543 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.543 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:46.543 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.543 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:46.543 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.543 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:46.543 [2024-10-01 14:36:37.917196] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:46.543 [2024-10-01 14:36:37.917249] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.543 [2024-10-01 14:36:37.917270] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:11:46.543 [2024-10-01 14:36:37.917280] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.543 [2024-10-01 14:36:37.919518] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.543 [2024-10-01 14:36:37.919553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:46.543 [2024-10-01 14:36:37.919642] 
bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:46.543 [2024-10-01 14:36:37.919687] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:46.543 [2024-10-01 14:36:37.919847] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:46.543 [2024-10-01 14:36:37.919939] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:46.543 spare 00:11:46.543 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.543 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:11:46.543 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.543 14:36:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:46.543 [2024-10-01 14:36:38.020049] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:11:46.543 [2024-10-01 14:36:38.020096] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:46.543 [2024-10-01 14:36:38.020444] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:11:46.543 [2024-10-01 14:36:38.020625] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:11:46.543 [2024-10-01 14:36:38.020650] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:11:46.543 [2024-10-01 14:36:38.020835] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:46.543 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.543 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:46.543 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.543 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:46.543 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.543 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.543 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:46.543 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.543 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.543 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.543 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.543 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.543 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.543 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.543 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:46.543 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.543 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.543 "name": "raid_bdev1", 00:11:46.543 "uuid": "d343bea5-a2df-4e6c-9f17-da3ed58b5401", 00:11:46.543 "strip_size_kb": 0, 00:11:46.543 "state": "online", 00:11:46.543 "raid_level": "raid1", 00:11:46.543 "superblock": true, 00:11:46.543 "num_base_bdevs": 4, 00:11:46.543 "num_base_bdevs_discovered": 3, 00:11:46.543 "num_base_bdevs_operational": 3, 00:11:46.543 "base_bdevs_list": [ 
00:11:46.543 { 00:11:46.543 "name": "spare", 00:11:46.543 "uuid": "833da7f0-6cd3-5b09-8937-c5f02b0a9a0b", 00:11:46.543 "is_configured": true, 00:11:46.543 "data_offset": 2048, 00:11:46.543 "data_size": 63488 00:11:46.543 }, 00:11:46.543 { 00:11:46.543 "name": null, 00:11:46.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.543 "is_configured": false, 00:11:46.543 "data_offset": 2048, 00:11:46.543 "data_size": 63488 00:11:46.543 }, 00:11:46.543 { 00:11:46.543 "name": "BaseBdev3", 00:11:46.543 "uuid": "20fdb125-c76d-5f36-8796-5e28cc809b2c", 00:11:46.543 "is_configured": true, 00:11:46.544 "data_offset": 2048, 00:11:46.544 "data_size": 63488 00:11:46.544 }, 00:11:46.544 { 00:11:46.544 "name": "BaseBdev4", 00:11:46.544 "uuid": "6878f767-3d76-5eab-83f8-39d2a25679e9", 00:11:46.544 "is_configured": true, 00:11:46.544 "data_offset": 2048, 00:11:46.544 "data_size": 63488 00:11:46.544 } 00:11:46.544 ] 00:11:46.544 }' 00:11:46.544 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.544 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:46.809 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:46.809 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:46.809 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:46.809 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:46.809 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:46.809 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.809 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.809 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:11:46.809 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.809 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.809 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:46.809 "name": "raid_bdev1", 00:11:46.809 "uuid": "d343bea5-a2df-4e6c-9f17-da3ed58b5401", 00:11:46.809 "strip_size_kb": 0, 00:11:46.809 "state": "online", 00:11:46.809 "raid_level": "raid1", 00:11:46.809 "superblock": true, 00:11:46.809 "num_base_bdevs": 4, 00:11:46.809 "num_base_bdevs_discovered": 3, 00:11:46.809 "num_base_bdevs_operational": 3, 00:11:46.809 "base_bdevs_list": [ 00:11:46.809 { 00:11:46.809 "name": "spare", 00:11:46.809 "uuid": "833da7f0-6cd3-5b09-8937-c5f02b0a9a0b", 00:11:46.809 "is_configured": true, 00:11:46.809 "data_offset": 2048, 00:11:46.809 "data_size": 63488 00:11:46.809 }, 00:11:46.809 { 00:11:46.809 "name": null, 00:11:46.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.809 "is_configured": false, 00:11:46.809 "data_offset": 2048, 00:11:46.809 "data_size": 63488 00:11:46.809 }, 00:11:46.809 { 00:11:46.809 "name": "BaseBdev3", 00:11:46.809 "uuid": "20fdb125-c76d-5f36-8796-5e28cc809b2c", 00:11:46.809 "is_configured": true, 00:11:46.809 "data_offset": 2048, 00:11:46.809 "data_size": 63488 00:11:46.809 }, 00:11:46.809 { 00:11:46.809 "name": "BaseBdev4", 00:11:46.809 "uuid": "6878f767-3d76-5eab-83f8-39d2a25679e9", 00:11:46.809 "is_configured": true, 00:11:46.809 "data_offset": 2048, 00:11:46.809 "data_size": 63488 00:11:46.809 } 00:11:46.809 ] 00:11:46.809 }' 00:11:46.809 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:46.809 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:46.809 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:46.809 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:11:46.809 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:11:46.809 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:46.809 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:46.809 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:46.809 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:46.809 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:11:46.809 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:11:46.809 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:46.809 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:46.809 [2024-10-01 14:36:38.473472] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:11:46.809 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:46.809 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:11:46.809 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:46.809 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:46.809 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:46.809 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:46.809 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:11:46.809 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:46.809 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:46.809 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:46.809 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:46.809 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:46.809 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:46.809 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:46.809 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:47.070 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:47.070 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:47.070 "name": "raid_bdev1",
00:11:47.070 "uuid": "d343bea5-a2df-4e6c-9f17-da3ed58b5401",
00:11:47.070 "strip_size_kb": 0,
00:11:47.070 "state": "online",
00:11:47.070 "raid_level": "raid1",
00:11:47.070 "superblock": true,
00:11:47.070 "num_base_bdevs": 4,
00:11:47.070 "num_base_bdevs_discovered": 2,
00:11:47.070 "num_base_bdevs_operational": 2,
00:11:47.070 "base_bdevs_list": [
00:11:47.070 {
00:11:47.070 "name": null,
00:11:47.070 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:47.070 "is_configured": false,
00:11:47.070 "data_offset": 0,
00:11:47.070 "data_size": 63488
00:11:47.070 },
00:11:47.070 {
00:11:47.070 "name": null,
00:11:47.070 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:47.070 "is_configured": false,
00:11:47.070 "data_offset": 2048,
00:11:47.070 "data_size": 63488
00:11:47.070 },
00:11:47.070 {
00:11:47.070 "name": "BaseBdev3",
00:11:47.070 "uuid": "20fdb125-c76d-5f36-8796-5e28cc809b2c",
00:11:47.070 "is_configured": true,
00:11:47.070 "data_offset": 2048,
00:11:47.070 "data_size": 63488
00:11:47.070 },
00:11:47.070 {
00:11:47.070 "name": "BaseBdev4",
00:11:47.070 "uuid": "6878f767-3d76-5eab-83f8-39d2a25679e9",
00:11:47.070 "is_configured": true,
00:11:47.070 "data_offset": 2048,
00:11:47.070 "data_size": 63488
00:11:47.070 }
00:11:47.070 ]
00:11:47.070 }'
00:11:47.070 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:47.070 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:47.330 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:11:47.330 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:47.330 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:47.330 [2024-10-01 14:36:38.801605] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:11:47.330 [2024-10-01 14:36:38.801809] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6)
00:11:47.330 [2024-10-01 14:36:38.801824] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:11:47.330 [2024-10-01 14:36:38.801866] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:11:47.330 [2024-10-01 14:36:38.810392] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230
00:11:47.330 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:47.330 14:36:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1
00:11:47.330 [2024-10-01 14:36:38.812282] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:11:48.267 14:36:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:11:48.267 14:36:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:48.267 14:36:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:11:48.267 14:36:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:11:48.267 14:36:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:48.267 14:36:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:48.267 14:36:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:48.267 14:36:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:48.267 14:36:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:48.267 14:36:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:48.267 14:36:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:48.267 "name": "raid_bdev1",
00:11:48.267 "uuid": "d343bea5-a2df-4e6c-9f17-da3ed58b5401",
00:11:48.267 "strip_size_kb": 0,
00:11:48.267 "state": "online",
00:11:48.267 "raid_level": "raid1",
00:11:48.267 "superblock": true,
00:11:48.267 "num_base_bdevs": 4,
00:11:48.267 "num_base_bdevs_discovered": 3,
00:11:48.267 "num_base_bdevs_operational": 3,
00:11:48.267 "process": {
00:11:48.267 "type": "rebuild",
00:11:48.267 "target": "spare",
00:11:48.267 "progress": {
00:11:48.267 "blocks": 20480,
00:11:48.267 "percent": 32
00:11:48.267 }
00:11:48.267 },
00:11:48.267 "base_bdevs_list": [
00:11:48.267 {
00:11:48.267 "name": "spare",
00:11:48.267 "uuid": "833da7f0-6cd3-5b09-8937-c5f02b0a9a0b",
00:11:48.267 "is_configured": true,
00:11:48.267 "data_offset": 2048,
00:11:48.267 "data_size": 63488
00:11:48.267 },
00:11:48.267 {
00:11:48.267 "name": null,
00:11:48.267 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:48.267 "is_configured": false,
00:11:48.267 "data_offset": 2048,
00:11:48.267 "data_size": 63488
00:11:48.267 },
00:11:48.267 {
00:11:48.267 "name": "BaseBdev3",
00:11:48.267 "uuid": "20fdb125-c76d-5f36-8796-5e28cc809b2c",
00:11:48.267 "is_configured": true,
00:11:48.267 "data_offset": 2048,
00:11:48.267 "data_size": 63488
00:11:48.267 },
00:11:48.267 {
00:11:48.267 "name": "BaseBdev4",
00:11:48.267 "uuid": "6878f767-3d76-5eab-83f8-39d2a25679e9",
00:11:48.267 "is_configured": true,
00:11:48.267 "data_offset": 2048,
00:11:48.267 "data_size": 63488
00:11:48.267 }
00:11:48.267 ]
00:11:48.267 }'
00:11:48.267 14:36:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:48.267 14:36:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:11:48.267 14:36:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:48.267 14:36:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:11:48.267 14:36:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:11:48.267 14:36:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:48.267 14:36:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:48.267 [2024-10-01 14:36:39.918962] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:11:48.525 [2024-10-01 14:36:40.018241] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:11:48.525 [2024-10-01 14:36:40.018317] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:48.526 [2024-10-01 14:36:40.018335] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:11:48.526 [2024-10-01 14:36:40.018343] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:11:48.526 14:36:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:48.526 14:36:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:11:48.526 14:36:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:48.526 14:36:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:48.526 14:36:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:48.526 14:36:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:48.526 14:36:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:11:48.526 14:36:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:48.526 14:36:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:48.526 14:36:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:48.526 14:36:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:48.526 14:36:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:48.526 14:36:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:48.526 14:36:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:48.526 14:36:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:48.526 14:36:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:48.526 14:36:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:48.526 "name": "raid_bdev1",
00:11:48.526 "uuid": "d343bea5-a2df-4e6c-9f17-da3ed58b5401",
00:11:48.526 "strip_size_kb": 0,
00:11:48.526 "state": "online",
00:11:48.526 "raid_level": "raid1",
00:11:48.526 "superblock": true,
00:11:48.526 "num_base_bdevs": 4,
00:11:48.526 "num_base_bdevs_discovered": 2,
00:11:48.526 "num_base_bdevs_operational": 2,
00:11:48.526 "base_bdevs_list": [
00:11:48.526 {
00:11:48.526 "name": null,
00:11:48.526 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:48.526 "is_configured": false,
00:11:48.526 "data_offset": 0,
00:11:48.526 "data_size": 63488
00:11:48.526 },
00:11:48.526 {
00:11:48.526 "name": null,
00:11:48.526 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:48.526 "is_configured": false,
00:11:48.526 "data_offset": 2048,
00:11:48.526 "data_size": 63488
00:11:48.526 },
00:11:48.526 {
00:11:48.526 "name": "BaseBdev3",
00:11:48.526 "uuid": "20fdb125-c76d-5f36-8796-5e28cc809b2c",
00:11:48.526 "is_configured": true,
00:11:48.526 "data_offset": 2048,
00:11:48.526 "data_size": 63488
00:11:48.526 },
00:11:48.526 {
00:11:48.526 "name": "BaseBdev4",
00:11:48.526 "uuid": "6878f767-3d76-5eab-83f8-39d2a25679e9",
00:11:48.526 "is_configured": true,
00:11:48.526 "data_offset": 2048,
00:11:48.526 "data_size": 63488
00:11:48.526 }
00:11:48.526 ]
00:11:48.526 }'
00:11:48.526 14:36:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:48.526 14:36:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:48.786 14:36:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:11:48.786 14:36:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:48.786 14:36:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:48.786 [2024-10-01 14:36:40.348320] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:11:48.786 [2024-10-01 14:36:40.348376] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:48.786 [2024-10-01 14:36:40.348402] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680
00:11:48.786 [2024-10-01 14:36:40.348411] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:48.786 [2024-10-01 14:36:40.348861] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:48.786 [2024-10-01 14:36:40.348887] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:11:48.786 [2024-10-01 14:36:40.348975] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:11:48.786 [2024-10-01 14:36:40.348986] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6)
00:11:48.786 [2024-10-01 14:36:40.348997] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:11:48.786 [2024-10-01 14:36:40.349023] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:11:48.786 [2024-10-01 14:36:40.357901] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300
00:11:48.786 spare
00:11:48.786 14:36:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:48.786 14:36:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1
00:11:48.786 [2024-10-01 14:36:40.359769] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:11:49.728 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:11:49.728 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:49.728 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:11:49.728 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:11:49.728 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:49.728 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:49.728 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:49.728 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:49.728 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:49.728 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:49.728 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:49.728 "name": "raid_bdev1",
00:11:49.728 "uuid": "d343bea5-a2df-4e6c-9f17-da3ed58b5401",
00:11:49.728 "strip_size_kb": 0,
00:11:49.728 "state": "online",
00:11:49.728 "raid_level": "raid1",
00:11:49.728 "superblock": true,
00:11:49.728 "num_base_bdevs": 4,
00:11:49.728 "num_base_bdevs_discovered": 3,
00:11:49.728 "num_base_bdevs_operational": 3,
00:11:49.728 "process": {
00:11:49.728 "type": "rebuild",
00:11:49.728 "target": "spare",
00:11:49.728 "progress": {
00:11:49.728 "blocks": 20480,
00:11:49.728 "percent": 32
00:11:49.728 }
00:11:49.728 },
00:11:49.728 "base_bdevs_list": [
00:11:49.728 {
00:11:49.728 "name": "spare",
00:11:49.728 "uuid": "833da7f0-6cd3-5b09-8937-c5f02b0a9a0b",
00:11:49.728 "is_configured": true,
00:11:49.728 "data_offset": 2048,
00:11:49.728 "data_size": 63488
00:11:49.728 },
00:11:49.728 {
00:11:49.728 "name": null,
00:11:49.728 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:49.728 "is_configured": false,
00:11:49.728 "data_offset": 2048,
00:11:49.728 "data_size": 63488
00:11:49.728 },
00:11:49.728 {
00:11:49.728 "name": "BaseBdev3",
00:11:49.728 "uuid": "20fdb125-c76d-5f36-8796-5e28cc809b2c",
00:11:49.728 "is_configured": true,
00:11:49.728 "data_offset": 2048,
00:11:49.728 "data_size": 63488
00:11:49.728 },
00:11:49.728 {
00:11:49.728 "name": "BaseBdev4",
00:11:49.728 "uuid": "6878f767-3d76-5eab-83f8-39d2a25679e9",
00:11:49.728 "is_configured": true,
00:11:49.728 "data_offset": 2048,
00:11:49.728 "data_size": 63488
00:11:49.728 }
00:11:49.728 ]
00:11:49.728 }'
00:11:49.987 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:49.987 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:11:49.987 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:49.987 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:11:49.987 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare
00:11:49.987 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:49.987 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:49.987 [2024-10-01 14:36:41.458018] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:11:49.987 [2024-10-01 14:36:41.465214] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:11:49.987 [2024-10-01 14:36:41.465271] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:49.987 [2024-10-01 14:36:41.465286] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:11:49.987 [2024-10-01 14:36:41.465297] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:11:49.987 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:49.987 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:11:49.987 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:49.987 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:49.987 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:49.987 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:49.987 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:11:49.987 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:49.987 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:49.987 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:49.987 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:49.987 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:49.987 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:49.987 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:49.987 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:49.987 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:49.987 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:49.987 "name": "raid_bdev1",
00:11:49.987 "uuid": "d343bea5-a2df-4e6c-9f17-da3ed58b5401",
00:11:49.987 "strip_size_kb": 0,
00:11:49.987 "state": "online",
00:11:49.987 "raid_level": "raid1",
00:11:49.987 "superblock": true,
00:11:49.987 "num_base_bdevs": 4,
00:11:49.987 "num_base_bdevs_discovered": 2,
00:11:49.987 "num_base_bdevs_operational": 2,
00:11:49.987 "base_bdevs_list": [
00:11:49.987 {
00:11:49.987 "name": null,
00:11:49.987 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:49.987 "is_configured": false,
00:11:49.987 "data_offset": 0,
00:11:49.987 "data_size": 63488
00:11:49.987 },
00:11:49.987 {
00:11:49.987 "name": null,
00:11:49.987 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:49.987 "is_configured": false,
00:11:49.987 "data_offset": 2048,
00:11:49.987 "data_size": 63488
00:11:49.987 },
00:11:49.987 {
00:11:49.987 "name": "BaseBdev3",
00:11:49.987 "uuid": "20fdb125-c76d-5f36-8796-5e28cc809b2c",
00:11:49.987 "is_configured": true,
00:11:49.987 "data_offset": 2048,
00:11:49.987 "data_size": 63488
00:11:49.987 },
00:11:49.987 {
00:11:49.987 "name": "BaseBdev4",
00:11:49.987 "uuid": "6878f767-3d76-5eab-83f8-39d2a25679e9",
00:11:49.987 "is_configured": true,
00:11:49.987 "data_offset": 2048,
00:11:49.987 "data_size": 63488
00:11:49.987 }
00:11:49.987 ]
00:11:49.987 }'
00:11:49.987 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:49.987 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:50.247 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none
00:11:50.247 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:50.247 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:11:50.247 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:11:50.247 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:50.247 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:50.247 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:50.247 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:50.247 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:50.247 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:50.247 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:50.247 "name": "raid_bdev1",
00:11:50.247 "uuid": "d343bea5-a2df-4e6c-9f17-da3ed58b5401",
00:11:50.247 "strip_size_kb": 0,
00:11:50.247 "state": "online",
00:11:50.247 "raid_level": "raid1",
00:11:50.247 "superblock": true,
00:11:50.247 "num_base_bdevs": 4,
00:11:50.247 "num_base_bdevs_discovered": 2,
00:11:50.247 "num_base_bdevs_operational": 2,
00:11:50.247 "base_bdevs_list": [
00:11:50.247 {
00:11:50.247 "name": null,
00:11:50.247 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:50.247 "is_configured": false,
00:11:50.247 "data_offset": 0,
00:11:50.247 "data_size": 63488
00:11:50.247 },
00:11:50.247 {
00:11:50.247 "name": null,
00:11:50.247 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:50.247 "is_configured": false,
00:11:50.247 "data_offset": 2048,
00:11:50.247 "data_size": 63488
00:11:50.247 },
00:11:50.247 {
00:11:50.247 "name": "BaseBdev3",
00:11:50.247 "uuid": "20fdb125-c76d-5f36-8796-5e28cc809b2c",
00:11:50.247 "is_configured": true,
00:11:50.247 "data_offset": 2048,
00:11:50.247 "data_size": 63488
00:11:50.247 },
00:11:50.247 {
00:11:50.247 "name": "BaseBdev4",
00:11:50.247 "uuid": "6878f767-3d76-5eab-83f8-39d2a25679e9",
00:11:50.247 "is_configured": true,
00:11:50.247 "data_offset": 2048,
00:11:50.247 "data_size": 63488
00:11:50.247 }
00:11:50.247 ]
00:11:50.247 }'
00:11:50.247 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:50.247 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:11:50.247 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:50.247 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:11:50.247 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1
00:11:50.247 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:50.247 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:50.247 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:50.247 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:11:50.247 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:50.247 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:50.247 [2024-10-01 14:36:41.903348] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:11:50.247 [2024-10-01 14:36:41.903404] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:50.247 [2024-10-01 14:36:41.903421] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80
00:11:50.247 [2024-10-01 14:36:41.903431] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:50.247 [2024-10-01 14:36:41.903848] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:50.247 [2024-10-01 14:36:41.903871] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:11:50.248 [2024-10-01 14:36:41.903938] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1
00:11:50.248 [2024-10-01 14:36:41.903953] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6)
00:11:50.248 [2024-10-01 14:36:41.903960] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:11:50.248 [2024-10-01 14:36:41.903971] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument
00:11:50.248 BaseBdev1
00:11:50.248 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:50.248 14:36:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1
00:11:51.626 14:36:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:11:51.626 14:36:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:51.626 14:36:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:51.626 14:36:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:51.626 14:36:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:51.626 14:36:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:11:51.626 14:36:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:51.626 14:36:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:51.626 14:36:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:51.626 14:36:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:51.626 14:36:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:51.626 14:36:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:51.626 14:36:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:51.626 14:36:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:51.626 14:36:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:51.626 14:36:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:51.626 "name": "raid_bdev1",
00:11:51.626 "uuid": "d343bea5-a2df-4e6c-9f17-da3ed58b5401",
00:11:51.626 "strip_size_kb": 0,
00:11:51.626 "state": "online",
00:11:51.626 "raid_level": "raid1",
00:11:51.626 "superblock": true,
00:11:51.626 "num_base_bdevs": 4,
00:11:51.626 "num_base_bdevs_discovered": 2,
00:11:51.626 "num_base_bdevs_operational": 2,
00:11:51.626 "base_bdevs_list": [
00:11:51.626 {
00:11:51.626 "name": null,
00:11:51.626 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:51.626 "is_configured": false,
00:11:51.626 "data_offset": 0,
00:11:51.626 "data_size": 63488
00:11:51.626 },
00:11:51.626 {
00:11:51.626 "name": null,
00:11:51.626 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:51.626 "is_configured": false,
00:11:51.626 "data_offset": 2048,
00:11:51.626 "data_size": 63488
00:11:51.626 },
00:11:51.626 {
00:11:51.626 "name": "BaseBdev3",
00:11:51.626 "uuid": "20fdb125-c76d-5f36-8796-5e28cc809b2c",
00:11:51.626 "is_configured": true,
00:11:51.626 "data_offset": 2048,
00:11:51.626 "data_size": 63488
00:11:51.626 },
00:11:51.626 {
00:11:51.626 "name": "BaseBdev4",
00:11:51.626 "uuid": "6878f767-3d76-5eab-83f8-39d2a25679e9",
00:11:51.626 "is_configured": true,
00:11:51.626 "data_offset": 2048,
00:11:51.626 "data_size": 63488
00:11:51.626 }
00:11:51.626 ]
00:11:51.626 }'
00:11:51.626 14:36:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:51.626 14:36:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:51.626 14:36:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none
00:11:51.626 14:36:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:51.626 14:36:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:11:51.626 14:36:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:11:51.626 14:36:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:51.626 14:36:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:51.626 14:36:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:51.626 14:36:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:51.626 14:36:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:51.626 14:36:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:51.626 14:36:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:51.626 "name": "raid_bdev1",
00:11:51.626 "uuid": "d343bea5-a2df-4e6c-9f17-da3ed58b5401",
00:11:51.626 "strip_size_kb": 0,
00:11:51.626 "state": "online",
00:11:51.626 "raid_level": "raid1",
00:11:51.626 "superblock": true,
00:11:51.626 "num_base_bdevs": 4,
00:11:51.626 "num_base_bdevs_discovered": 2,
00:11:51.626 "num_base_bdevs_operational": 2,
00:11:51.626 "base_bdevs_list": [
00:11:51.626 {
00:11:51.626 "name": null,
00:11:51.626 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:51.626 "is_configured": false,
00:11:51.626 "data_offset": 0,
00:11:51.626 "data_size": 63488
00:11:51.626 },
00:11:51.626 {
00:11:51.626 "name": null,
00:11:51.626 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:51.626 "is_configured": false,
00:11:51.626 "data_offset": 2048,
00:11:51.626 "data_size": 63488
00:11:51.626 },
00:11:51.626 {
00:11:51.626 "name": "BaseBdev3",
00:11:51.626 "uuid": "20fdb125-c76d-5f36-8796-5e28cc809b2c",
00:11:51.626 "is_configured": true,
00:11:51.626 "data_offset": 2048,
00:11:51.626 "data_size": 63488
00:11:51.626 },
00:11:51.626 {
00:11:51.626 "name": "BaseBdev4",
00:11:51.626 "uuid": "6878f767-3d76-5eab-83f8-39d2a25679e9",
00:11:51.626 "is_configured": true,
00:11:51.626 "data_offset": 2048,
00:11:51.626 "data_size": 63488
00:11:51.626 }
00:11:51.626 ]
00:11:51.626 }'
00:11:51.626 14:36:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:51.626 14:36:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:11:51.884 14:36:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:51.884 14:36:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:11:51.884 14:36:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:11:51.884 14:36:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0
00:11:51.884 14:36:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:11:51.884 14:36:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:11:51.884 14:36:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:51.884 14:36:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:11:51.884 14:36:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:51.884 14:36:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:11:51.884 14:36:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:51.884 14:36:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:51.884 [2024-10-01 14:36:43.327849] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:51.884 [2024-10-01 14:36:43.327976] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6)
00:11:51.884 [2024-10-01 14:36:43.327986] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:11:51.884 request:
00:11:51.884 {
00:11:51.884 "base_bdev": "BaseBdev1",
00:11:51.884 "raid_bdev": "raid_bdev1",
00:11:51.884 "method": "bdev_raid_add_base_bdev",
00:11:51.885 "req_id": 1
00:11:51.885 }
00:11:51.885 Got JSON-RPC error response
00:11:51.885 response:
00:11:51.885 {
00:11:51.885 "code": -22,
00:11:51.885 "message": "Failed to add base bdev to RAID bdev: Invalid argument"
00:11:51.885 }
00:11:51.885 14:36:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:11:51.885 14:36:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1
00:11:51.885 14:36:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:11:51.885 14:36:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:11:51.885 14:36:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:11:51.885 14:36:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1
00:11:52.875 14:36:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:11:52.875 14:36:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:52.875 14:36:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:52.875 14:36:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:52.875 14:36:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:52.875 14:36:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:11:52.875 14:36:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:52.875 14:36:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:52.875 14:36:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:52.875 14:36:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:52.875 14:36:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:52.875 14:36:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:52.875 14:36:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:52.875 14:36:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:52.875 14:36:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:52.875 14:36:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:52.875 "name": "raid_bdev1",
00:11:52.875 "uuid": "d343bea5-a2df-4e6c-9f17-da3ed58b5401",
00:11:52.875 "strip_size_kb": 0,
00:11:52.875 "state": "online",
00:11:52.875 "raid_level": "raid1",
00:11:52.875 "superblock": true,
00:11:52.875 "num_base_bdevs": 4,
00:11:52.875 "num_base_bdevs_discovered": 2,
00:11:52.875 "num_base_bdevs_operational": 2,
00:11:52.875 "base_bdevs_list": [
00:11:52.875 {
00:11:52.875 "name": null,
00:11:52.875 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:52.875 "is_configured": false,
00:11:52.875 "data_offset": 0,
00:11:52.875 "data_size": 63488
00:11:52.875 },
00:11:52.875 {
00:11:52.875 "name": null,
00:11:52.875 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:52.875 "is_configured": false,
00:11:52.875 "data_offset": 2048,
00:11:52.875 "data_size": 63488
00:11:52.875 },
00:11:52.875 {
00:11:52.875 "name": "BaseBdev3",
00:11:52.875 "uuid": "20fdb125-c76d-5f36-8796-5e28cc809b2c",
00:11:52.875 "is_configured": true,
00:11:52.875 "data_offset": 2048,
00:11:52.875 "data_size": 63488
00:11:52.875 },
00:11:52.875 {
00:11:52.875 "name": "BaseBdev4",
00:11:52.875 "uuid": "6878f767-3d76-5eab-83f8-39d2a25679e9",
00:11:52.875 "is_configured": true,
00:11:52.875 "data_offset": 2048,
00:11:52.875 "data_size": 63488
00:11:52.875 }
00:11:52.875 ]
00:11:52.875 }'
00:11:52.875 14:36:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:52.875 14:36:44
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:53.134 14:36:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:53.134 14:36:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:53.135 14:36:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:53.135 14:36:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:53.135 14:36:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:53.135 14:36:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.135 14:36:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.135 14:36:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.135 14:36:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:53.135 14:36:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.135 14:36:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:53.135 "name": "raid_bdev1", 00:11:53.135 "uuid": "d343bea5-a2df-4e6c-9f17-da3ed58b5401", 00:11:53.135 "strip_size_kb": 0, 00:11:53.135 "state": "online", 00:11:53.135 "raid_level": "raid1", 00:11:53.135 "superblock": true, 00:11:53.135 "num_base_bdevs": 4, 00:11:53.135 "num_base_bdevs_discovered": 2, 00:11:53.135 "num_base_bdevs_operational": 2, 00:11:53.135 "base_bdevs_list": [ 00:11:53.135 { 00:11:53.135 "name": null, 00:11:53.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.135 "is_configured": false, 00:11:53.135 "data_offset": 0, 00:11:53.135 "data_size": 63488 00:11:53.135 }, 00:11:53.135 { 00:11:53.135 "name": null, 00:11:53.135 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:53.135 "is_configured": false, 00:11:53.135 "data_offset": 2048, 00:11:53.135 "data_size": 63488 00:11:53.135 }, 00:11:53.135 { 00:11:53.135 "name": "BaseBdev3", 00:11:53.135 "uuid": "20fdb125-c76d-5f36-8796-5e28cc809b2c", 00:11:53.135 "is_configured": true, 00:11:53.135 "data_offset": 2048, 00:11:53.135 "data_size": 63488 00:11:53.135 }, 00:11:53.135 { 00:11:53.135 "name": "BaseBdev4", 00:11:53.135 "uuid": "6878f767-3d76-5eab-83f8-39d2a25679e9", 00:11:53.135 "is_configured": true, 00:11:53.135 "data_offset": 2048, 00:11:53.135 "data_size": 63488 00:11:53.135 } 00:11:53.135 ] 00:11:53.135 }' 00:11:53.135 14:36:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:53.135 14:36:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:53.135 14:36:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:53.135 14:36:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:53.135 14:36:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77223 00:11:53.135 14:36:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 77223 ']' 00:11:53.135 14:36:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 77223 00:11:53.135 14:36:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:11:53.135 14:36:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:53.135 14:36:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77223 00:11:53.135 killing process with pid 77223 00:11:53.135 Received shutdown signal, test time was about 16.998043 seconds 00:11:53.135 00:11:53.135 Latency(us) 00:11:53.135 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:11:53.135 =================================================================================================================== 00:11:53.135 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:53.135 14:36:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:53.135 14:36:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:53.135 14:36:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77223' 00:11:53.135 14:36:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 77223 00:11:53.135 [2024-10-01 14:36:44.806304] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:53.135 14:36:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 77223 00:11:53.135 [2024-10-01 14:36:44.806401] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:53.135 [2024-10-01 14:36:44.806465] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:53.135 [2024-10-01 14:36:44.806474] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:11:53.392 [2024-10-01 14:36:45.015694] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:54.325 14:36:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:11:54.325 00:11:54.325 real 0m19.595s 00:11:54.325 user 0m24.753s 00:11:54.325 sys 0m1.756s 00:11:54.325 14:36:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:54.325 14:36:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.325 ************************************ 00:11:54.325 END TEST raid_rebuild_test_sb_io 00:11:54.325 ************************************ 00:11:54.325 14:36:45 bdev_raid -- 
bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:11:54.325 14:36:45 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:11:54.325 14:36:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:54.325 14:36:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:54.325 14:36:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:54.325 ************************************ 00:11:54.325 START TEST raid5f_state_function_test 00:11:54.325 ************************************ 00:11:54.325 14:36:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 false 00:11:54.325 14:36:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:11:54.325 14:36:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:54.325 14:36:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:54.325 14:36:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:54.325 14:36:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:54.325 14:36:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:54.325 14:36:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:54.325 14:36:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:54.325 14:36:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:54.325 14:36:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:54.325 14:36:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:54.325 14:36:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:11:54.325 14:36:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:54.325 14:36:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:54.325 14:36:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:54.325 14:36:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:54.325 14:36:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:54.325 14:36:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:54.325 14:36:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:54.325 14:36:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:54.325 14:36:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:54.325 14:36:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:11:54.325 14:36:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:54.325 14:36:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:54.325 14:36:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:54.325 14:36:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:54.325 Process raid pid: 77936 00:11:54.325 14:36:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=77936 00:11:54.325 14:36:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77936' 00:11:54.325 14:36:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 77936 00:11:54.325 14:36:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:54.325 14:36:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 77936 ']' 00:11:54.325 14:36:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.325 14:36:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:54.326 14:36:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.326 14:36:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:54.326 14:36:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.326 [2024-10-01 14:36:45.847564] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:11:54.326 [2024-10-01 14:36:45.847858] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:54.326 [2024-10-01 14:36:46.000507] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.586 [2024-10-01 14:36:46.189065] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.846 [2024-10-01 14:36:46.330562] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:54.846 [2024-10-01 14:36:46.330592] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:55.107 14:36:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:55.107 14:36:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:11:55.107 14:36:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:55.107 14:36:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.107 14:36:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.107 [2024-10-01 14:36:46.742823] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:55.107 [2024-10-01 14:36:46.742873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:55.107 [2024-10-01 14:36:46.742883] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:55.107 [2024-10-01 14:36:46.742893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:55.107 [2024-10-01 14:36:46.742899] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:55.107 [2024-10-01 14:36:46.742909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:55.107 14:36:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.107 14:36:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:11:55.107 14:36:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.107 14:36:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.107 14:36:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:55.107 14:36:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:55.107 14:36:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:55.107 14:36:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.107 14:36:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.107 14:36:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.107 14:36:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.107 14:36:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.107 14:36:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.107 14:36:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.107 14:36:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.107 14:36:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:11:55.107 14:36:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.107 "name": "Existed_Raid", 00:11:55.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.107 "strip_size_kb": 64, 00:11:55.107 "state": "configuring", 00:11:55.107 "raid_level": "raid5f", 00:11:55.107 "superblock": false, 00:11:55.107 "num_base_bdevs": 3, 00:11:55.107 "num_base_bdevs_discovered": 0, 00:11:55.107 "num_base_bdevs_operational": 3, 00:11:55.107 "base_bdevs_list": [ 00:11:55.107 { 00:11:55.107 "name": "BaseBdev1", 00:11:55.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.107 "is_configured": false, 00:11:55.107 "data_offset": 0, 00:11:55.107 "data_size": 0 00:11:55.107 }, 00:11:55.107 { 00:11:55.107 "name": "BaseBdev2", 00:11:55.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.107 "is_configured": false, 00:11:55.107 "data_offset": 0, 00:11:55.107 "data_size": 0 00:11:55.107 }, 00:11:55.107 { 00:11:55.107 "name": "BaseBdev3", 00:11:55.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.107 "is_configured": false, 00:11:55.107 "data_offset": 0, 00:11:55.107 "data_size": 0 00:11:55.107 } 00:11:55.107 ] 00:11:55.107 }' 00:11:55.107 14:36:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.107 14:36:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.678 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:55.678 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.678 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.678 [2024-10-01 14:36:47.054838] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:55.678 [2024-10-01 14:36:47.054879] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:11:55.678 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.678 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:55.678 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.678 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.678 [2024-10-01 14:36:47.062854] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:55.678 [2024-10-01 14:36:47.062898] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:55.678 [2024-10-01 14:36:47.062907] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:55.678 [2024-10-01 14:36:47.062916] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:55.678 [2024-10-01 14:36:47.062923] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:55.678 [2024-10-01 14:36:47.062932] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:55.678 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.678 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:55.678 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.678 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.678 [2024-10-01 14:36:47.111249] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:55.678 BaseBdev1 00:11:55.678 14:36:47 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.678 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:55.678 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:55.678 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:55.678 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:55.678 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:55.678 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:55.678 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:55.678 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.678 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.678 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.678 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:55.678 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.678 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.678 [ 00:11:55.678 { 00:11:55.678 "name": "BaseBdev1", 00:11:55.678 "aliases": [ 00:11:55.678 "ec9300f7-226b-4d45-8589-44c5fb5985a7" 00:11:55.678 ], 00:11:55.678 "product_name": "Malloc disk", 00:11:55.678 "block_size": 512, 00:11:55.678 "num_blocks": 65536, 00:11:55.678 "uuid": "ec9300f7-226b-4d45-8589-44c5fb5985a7", 00:11:55.678 "assigned_rate_limits": { 00:11:55.678 "rw_ios_per_sec": 0, 00:11:55.678 
"rw_mbytes_per_sec": 0, 00:11:55.678 "r_mbytes_per_sec": 0, 00:11:55.678 "w_mbytes_per_sec": 0 00:11:55.678 }, 00:11:55.678 "claimed": true, 00:11:55.678 "claim_type": "exclusive_write", 00:11:55.678 "zoned": false, 00:11:55.678 "supported_io_types": { 00:11:55.678 "read": true, 00:11:55.678 "write": true, 00:11:55.678 "unmap": true, 00:11:55.678 "flush": true, 00:11:55.678 "reset": true, 00:11:55.678 "nvme_admin": false, 00:11:55.678 "nvme_io": false, 00:11:55.678 "nvme_io_md": false, 00:11:55.678 "write_zeroes": true, 00:11:55.678 "zcopy": true, 00:11:55.678 "get_zone_info": false, 00:11:55.678 "zone_management": false, 00:11:55.678 "zone_append": false, 00:11:55.678 "compare": false, 00:11:55.678 "compare_and_write": false, 00:11:55.678 "abort": true, 00:11:55.678 "seek_hole": false, 00:11:55.678 "seek_data": false, 00:11:55.678 "copy": true, 00:11:55.678 "nvme_iov_md": false 00:11:55.678 }, 00:11:55.678 "memory_domains": [ 00:11:55.678 { 00:11:55.678 "dma_device_id": "system", 00:11:55.678 "dma_device_type": 1 00:11:55.678 }, 00:11:55.678 { 00:11:55.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.678 "dma_device_type": 2 00:11:55.678 } 00:11:55.678 ], 00:11:55.678 "driver_specific": {} 00:11:55.678 } 00:11:55.678 ] 00:11:55.678 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.678 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:55.678 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:11:55.678 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.678 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.678 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:55.678 14:36:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:55.678 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:55.678 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.678 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.678 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.678 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.678 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.678 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.678 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.678 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.678 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.678 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.678 "name": "Existed_Raid", 00:11:55.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.678 "strip_size_kb": 64, 00:11:55.678 "state": "configuring", 00:11:55.678 "raid_level": "raid5f", 00:11:55.678 "superblock": false, 00:11:55.678 "num_base_bdevs": 3, 00:11:55.678 "num_base_bdevs_discovered": 1, 00:11:55.678 "num_base_bdevs_operational": 3, 00:11:55.679 "base_bdevs_list": [ 00:11:55.679 { 00:11:55.679 "name": "BaseBdev1", 00:11:55.679 "uuid": "ec9300f7-226b-4d45-8589-44c5fb5985a7", 00:11:55.679 "is_configured": true, 00:11:55.679 "data_offset": 0, 00:11:55.679 "data_size": 65536 00:11:55.679 }, 00:11:55.679 { 00:11:55.679 "name": 
"BaseBdev2", 00:11:55.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.679 "is_configured": false, 00:11:55.679 "data_offset": 0, 00:11:55.679 "data_size": 0 00:11:55.679 }, 00:11:55.679 { 00:11:55.679 "name": "BaseBdev3", 00:11:55.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.679 "is_configured": false, 00:11:55.679 "data_offset": 0, 00:11:55.679 "data_size": 0 00:11:55.679 } 00:11:55.679 ] 00:11:55.679 }' 00:11:55.679 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.679 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.939 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:55.939 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.939 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.939 [2024-10-01 14:36:47.467392] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:55.939 [2024-10-01 14:36:47.467595] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:55.939 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.939 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:55.939 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.939 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.939 [2024-10-01 14:36:47.475432] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:55.939 [2024-10-01 14:36:47.477408] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:11:55.939 [2024-10-01 14:36:47.477535] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:55.939 [2024-10-01 14:36:47.477648] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:55.939 [2024-10-01 14:36:47.477694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:55.939 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.939 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:55.939 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:55.939 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:11:55.939 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.939 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.939 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:55.939 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:55.939 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:55.939 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.939 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.939 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.939 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.939 14:36:47 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.939 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.939 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.939 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.939 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.939 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.939 "name": "Existed_Raid", 00:11:55.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.939 "strip_size_kb": 64, 00:11:55.939 "state": "configuring", 00:11:55.939 "raid_level": "raid5f", 00:11:55.939 "superblock": false, 00:11:55.939 "num_base_bdevs": 3, 00:11:55.939 "num_base_bdevs_discovered": 1, 00:11:55.939 "num_base_bdevs_operational": 3, 00:11:55.939 "base_bdevs_list": [ 00:11:55.939 { 00:11:55.939 "name": "BaseBdev1", 00:11:55.939 "uuid": "ec9300f7-226b-4d45-8589-44c5fb5985a7", 00:11:55.939 "is_configured": true, 00:11:55.939 "data_offset": 0, 00:11:55.939 "data_size": 65536 00:11:55.939 }, 00:11:55.939 { 00:11:55.939 "name": "BaseBdev2", 00:11:55.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.939 "is_configured": false, 00:11:55.939 "data_offset": 0, 00:11:55.939 "data_size": 0 00:11:55.939 }, 00:11:55.939 { 00:11:55.939 "name": "BaseBdev3", 00:11:55.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.939 "is_configured": false, 00:11:55.939 "data_offset": 0, 00:11:55.939 "data_size": 0 00:11:55.939 } 00:11:55.939 ] 00:11:55.939 }' 00:11:55.939 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.939 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.201 14:36:47 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:56.201 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.201 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.201 [2024-10-01 14:36:47.822331] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:56.201 BaseBdev2 00:11:56.201 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.201 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:56.201 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:56.201 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:56.201 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:56.201 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:56.201 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:56.201 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:56.201 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.201 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.201 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.201 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:56.201 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.201 14:36:47 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:56.201 [ 00:11:56.201 { 00:11:56.201 "name": "BaseBdev2", 00:11:56.201 "aliases": [ 00:11:56.201 "7a18ec87-999e-4f66-aefd-eea021a141d1" 00:11:56.201 ], 00:11:56.201 "product_name": "Malloc disk", 00:11:56.201 "block_size": 512, 00:11:56.201 "num_blocks": 65536, 00:11:56.201 "uuid": "7a18ec87-999e-4f66-aefd-eea021a141d1", 00:11:56.201 "assigned_rate_limits": { 00:11:56.201 "rw_ios_per_sec": 0, 00:11:56.201 "rw_mbytes_per_sec": 0, 00:11:56.201 "r_mbytes_per_sec": 0, 00:11:56.201 "w_mbytes_per_sec": 0 00:11:56.201 }, 00:11:56.201 "claimed": true, 00:11:56.201 "claim_type": "exclusive_write", 00:11:56.201 "zoned": false, 00:11:56.201 "supported_io_types": { 00:11:56.201 "read": true, 00:11:56.201 "write": true, 00:11:56.201 "unmap": true, 00:11:56.201 "flush": true, 00:11:56.201 "reset": true, 00:11:56.201 "nvme_admin": false, 00:11:56.201 "nvme_io": false, 00:11:56.201 "nvme_io_md": false, 00:11:56.201 "write_zeroes": true, 00:11:56.201 "zcopy": true, 00:11:56.201 "get_zone_info": false, 00:11:56.201 "zone_management": false, 00:11:56.201 "zone_append": false, 00:11:56.202 "compare": false, 00:11:56.202 "compare_and_write": false, 00:11:56.202 "abort": true, 00:11:56.202 "seek_hole": false, 00:11:56.202 "seek_data": false, 00:11:56.202 "copy": true, 00:11:56.202 "nvme_iov_md": false 00:11:56.202 }, 00:11:56.202 "memory_domains": [ 00:11:56.202 { 00:11:56.202 "dma_device_id": "system", 00:11:56.202 "dma_device_type": 1 00:11:56.202 }, 00:11:56.202 { 00:11:56.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.202 "dma_device_type": 2 00:11:56.202 } 00:11:56.202 ], 00:11:56.202 "driver_specific": {} 00:11:56.202 } 00:11:56.202 ] 00:11:56.202 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.202 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:56.202 14:36:47 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:56.202 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:56.202 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:11:56.202 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.202 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.202 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:56.202 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:56.202 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:56.202 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.202 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.202 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.202 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.202 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.202 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.202 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.202 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.202 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.463 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:11:56.463 "name": "Existed_Raid", 00:11:56.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.463 "strip_size_kb": 64, 00:11:56.463 "state": "configuring", 00:11:56.463 "raid_level": "raid5f", 00:11:56.463 "superblock": false, 00:11:56.463 "num_base_bdevs": 3, 00:11:56.463 "num_base_bdevs_discovered": 2, 00:11:56.463 "num_base_bdevs_operational": 3, 00:11:56.463 "base_bdevs_list": [ 00:11:56.463 { 00:11:56.463 "name": "BaseBdev1", 00:11:56.463 "uuid": "ec9300f7-226b-4d45-8589-44c5fb5985a7", 00:11:56.463 "is_configured": true, 00:11:56.463 "data_offset": 0, 00:11:56.463 "data_size": 65536 00:11:56.463 }, 00:11:56.463 { 00:11:56.463 "name": "BaseBdev2", 00:11:56.463 "uuid": "7a18ec87-999e-4f66-aefd-eea021a141d1", 00:11:56.463 "is_configured": true, 00:11:56.463 "data_offset": 0, 00:11:56.463 "data_size": 65536 00:11:56.463 }, 00:11:56.463 { 00:11:56.463 "name": "BaseBdev3", 00:11:56.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.463 "is_configured": false, 00:11:56.463 "data_offset": 0, 00:11:56.463 "data_size": 0 00:11:56.463 } 00:11:56.463 ] 00:11:56.463 }' 00:11:56.463 14:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.463 14:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.729 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:56.729 14:36:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.729 14:36:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.729 [2024-10-01 14:36:48.206464] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:56.729 [2024-10-01 14:36:48.206520] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:56.729 [2024-10-01 14:36:48.206538] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:11:56.729 [2024-10-01 14:36:48.206818] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:56.729 [2024-10-01 14:36:48.210600] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:56.729 [2024-10-01 14:36:48.210623] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:56.729 [2024-10-01 14:36:48.210902] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:56.729 BaseBdev3 00:11:56.729 14:36:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.729 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:56.729 14:36:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:56.729 14:36:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:56.729 14:36:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:56.729 14:36:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:56.729 14:36:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:56.729 14:36:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:56.729 14:36:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.729 14:36:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.729 14:36:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.729 14:36:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:11:56.730 14:36:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.730 14:36:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.730 [ 00:11:56.730 { 00:11:56.730 "name": "BaseBdev3", 00:11:56.730 "aliases": [ 00:11:56.730 "39b60d91-49a5-4153-a68f-b1a0de743ef4" 00:11:56.730 ], 00:11:56.730 "product_name": "Malloc disk", 00:11:56.730 "block_size": 512, 00:11:56.730 "num_blocks": 65536, 00:11:56.730 "uuid": "39b60d91-49a5-4153-a68f-b1a0de743ef4", 00:11:56.730 "assigned_rate_limits": { 00:11:56.730 "rw_ios_per_sec": 0, 00:11:56.730 "rw_mbytes_per_sec": 0, 00:11:56.730 "r_mbytes_per_sec": 0, 00:11:56.730 "w_mbytes_per_sec": 0 00:11:56.730 }, 00:11:56.730 "claimed": true, 00:11:56.730 "claim_type": "exclusive_write", 00:11:56.730 "zoned": false, 00:11:56.730 "supported_io_types": { 00:11:56.730 "read": true, 00:11:56.730 "write": true, 00:11:56.730 "unmap": true, 00:11:56.730 "flush": true, 00:11:56.730 "reset": true, 00:11:56.730 "nvme_admin": false, 00:11:56.730 "nvme_io": false, 00:11:56.730 "nvme_io_md": false, 00:11:56.730 "write_zeroes": true, 00:11:56.730 "zcopy": true, 00:11:56.730 "get_zone_info": false, 00:11:56.730 "zone_management": false, 00:11:56.730 "zone_append": false, 00:11:56.730 "compare": false, 00:11:56.730 "compare_and_write": false, 00:11:56.730 "abort": true, 00:11:56.730 "seek_hole": false, 00:11:56.730 "seek_data": false, 00:11:56.730 "copy": true, 00:11:56.730 "nvme_iov_md": false 00:11:56.730 }, 00:11:56.730 "memory_domains": [ 00:11:56.730 { 00:11:56.730 "dma_device_id": "system", 00:11:56.730 "dma_device_type": 1 00:11:56.730 }, 00:11:56.730 { 00:11:56.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.730 "dma_device_type": 2 00:11:56.730 } 00:11:56.730 ], 00:11:56.730 "driver_specific": {} 00:11:56.730 } 00:11:56.730 ] 00:11:56.730 14:36:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:11:56.730 14:36:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:56.730 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:56.730 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:56.730 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:11:56.730 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.730 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:56.730 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:56.730 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:56.730 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:56.730 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.730 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.730 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.730 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.730 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.730 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.730 14:36:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.730 14:36:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.730 14:36:48 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.730 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.730 "name": "Existed_Raid", 00:11:56.730 "uuid": "b134d80e-6d37-4d96-a4ba-bae3db06b994", 00:11:56.730 "strip_size_kb": 64, 00:11:56.730 "state": "online", 00:11:56.730 "raid_level": "raid5f", 00:11:56.730 "superblock": false, 00:11:56.730 "num_base_bdevs": 3, 00:11:56.730 "num_base_bdevs_discovered": 3, 00:11:56.730 "num_base_bdevs_operational": 3, 00:11:56.730 "base_bdevs_list": [ 00:11:56.730 { 00:11:56.730 "name": "BaseBdev1", 00:11:56.730 "uuid": "ec9300f7-226b-4d45-8589-44c5fb5985a7", 00:11:56.730 "is_configured": true, 00:11:56.730 "data_offset": 0, 00:11:56.730 "data_size": 65536 00:11:56.730 }, 00:11:56.730 { 00:11:56.730 "name": "BaseBdev2", 00:11:56.730 "uuid": "7a18ec87-999e-4f66-aefd-eea021a141d1", 00:11:56.730 "is_configured": true, 00:11:56.730 "data_offset": 0, 00:11:56.730 "data_size": 65536 00:11:56.730 }, 00:11:56.730 { 00:11:56.730 "name": "BaseBdev3", 00:11:56.730 "uuid": "39b60d91-49a5-4153-a68f-b1a0de743ef4", 00:11:56.730 "is_configured": true, 00:11:56.730 "data_offset": 0, 00:11:56.730 "data_size": 65536 00:11:56.730 } 00:11:56.730 ] 00:11:56.730 }' 00:11:56.730 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.730 14:36:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.004 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:57.005 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:57.005 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:57.005 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:57.005 14:36:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:57.005 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:57.005 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:57.005 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:57.005 14:36:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.005 14:36:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.005 [2024-10-01 14:36:48.571331] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:57.005 14:36:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.005 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:57.005 "name": "Existed_Raid", 00:11:57.005 "aliases": [ 00:11:57.005 "b134d80e-6d37-4d96-a4ba-bae3db06b994" 00:11:57.005 ], 00:11:57.005 "product_name": "Raid Volume", 00:11:57.005 "block_size": 512, 00:11:57.005 "num_blocks": 131072, 00:11:57.005 "uuid": "b134d80e-6d37-4d96-a4ba-bae3db06b994", 00:11:57.005 "assigned_rate_limits": { 00:11:57.005 "rw_ios_per_sec": 0, 00:11:57.005 "rw_mbytes_per_sec": 0, 00:11:57.005 "r_mbytes_per_sec": 0, 00:11:57.005 "w_mbytes_per_sec": 0 00:11:57.005 }, 00:11:57.005 "claimed": false, 00:11:57.005 "zoned": false, 00:11:57.005 "supported_io_types": { 00:11:57.005 "read": true, 00:11:57.005 "write": true, 00:11:57.005 "unmap": false, 00:11:57.005 "flush": false, 00:11:57.005 "reset": true, 00:11:57.005 "nvme_admin": false, 00:11:57.005 "nvme_io": false, 00:11:57.005 "nvme_io_md": false, 00:11:57.005 "write_zeroes": true, 00:11:57.005 "zcopy": false, 00:11:57.005 "get_zone_info": false, 00:11:57.005 "zone_management": false, 00:11:57.005 "zone_append": false, 
00:11:57.005 "compare": false, 00:11:57.005 "compare_and_write": false, 00:11:57.005 "abort": false, 00:11:57.005 "seek_hole": false, 00:11:57.005 "seek_data": false, 00:11:57.005 "copy": false, 00:11:57.005 "nvme_iov_md": false 00:11:57.005 }, 00:11:57.005 "driver_specific": { 00:11:57.005 "raid": { 00:11:57.005 "uuid": "b134d80e-6d37-4d96-a4ba-bae3db06b994", 00:11:57.005 "strip_size_kb": 64, 00:11:57.005 "state": "online", 00:11:57.005 "raid_level": "raid5f", 00:11:57.005 "superblock": false, 00:11:57.005 "num_base_bdevs": 3, 00:11:57.005 "num_base_bdevs_discovered": 3, 00:11:57.005 "num_base_bdevs_operational": 3, 00:11:57.005 "base_bdevs_list": [ 00:11:57.005 { 00:11:57.005 "name": "BaseBdev1", 00:11:57.005 "uuid": "ec9300f7-226b-4d45-8589-44c5fb5985a7", 00:11:57.005 "is_configured": true, 00:11:57.005 "data_offset": 0, 00:11:57.005 "data_size": 65536 00:11:57.005 }, 00:11:57.005 { 00:11:57.005 "name": "BaseBdev2", 00:11:57.005 "uuid": "7a18ec87-999e-4f66-aefd-eea021a141d1", 00:11:57.005 "is_configured": true, 00:11:57.005 "data_offset": 0, 00:11:57.005 "data_size": 65536 00:11:57.005 }, 00:11:57.005 { 00:11:57.005 "name": "BaseBdev3", 00:11:57.005 "uuid": "39b60d91-49a5-4153-a68f-b1a0de743ef4", 00:11:57.005 "is_configured": true, 00:11:57.005 "data_offset": 0, 00:11:57.005 "data_size": 65536 00:11:57.005 } 00:11:57.005 ] 00:11:57.005 } 00:11:57.005 } 00:11:57.005 }' 00:11:57.005 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:57.005 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:57.005 BaseBdev2 00:11:57.005 BaseBdev3' 00:11:57.005 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.005 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:11:57.005 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.005 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:57.005 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.005 14:36:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.005 14:36:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.005 14:36:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.264 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.264 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.264 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.264 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:57.264 14:36:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.264 14:36:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.264 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.264 14:36:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.264 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.264 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.264 14:36:48 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.264 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:57.264 14:36:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.264 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.264 14:36:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.264 14:36:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.264 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.264 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.264 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:57.264 14:36:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.264 14:36:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.264 [2024-10-01 14:36:48.759167] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:57.264 14:36:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.264 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:57.264 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:11:57.264 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:57.264 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:57.264 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:57.264 
14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:11:57.264 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.264 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:57.264 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:57.264 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.264 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:57.264 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.264 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.264 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.264 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.264 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.264 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.264 14:36:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.264 14:36:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.264 14:36:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.264 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.264 "name": "Existed_Raid", 00:11:57.265 "uuid": "b134d80e-6d37-4d96-a4ba-bae3db06b994", 00:11:57.265 "strip_size_kb": 64, 00:11:57.265 "state": 
"online", 00:11:57.265 "raid_level": "raid5f", 00:11:57.265 "superblock": false, 00:11:57.265 "num_base_bdevs": 3, 00:11:57.265 "num_base_bdevs_discovered": 2, 00:11:57.265 "num_base_bdevs_operational": 2, 00:11:57.265 "base_bdevs_list": [ 00:11:57.265 { 00:11:57.265 "name": null, 00:11:57.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.265 "is_configured": false, 00:11:57.265 "data_offset": 0, 00:11:57.265 "data_size": 65536 00:11:57.265 }, 00:11:57.265 { 00:11:57.265 "name": "BaseBdev2", 00:11:57.265 "uuid": "7a18ec87-999e-4f66-aefd-eea021a141d1", 00:11:57.265 "is_configured": true, 00:11:57.265 "data_offset": 0, 00:11:57.265 "data_size": 65536 00:11:57.265 }, 00:11:57.265 { 00:11:57.265 "name": "BaseBdev3", 00:11:57.265 "uuid": "39b60d91-49a5-4153-a68f-b1a0de743ef4", 00:11:57.265 "is_configured": true, 00:11:57.265 "data_offset": 0, 00:11:57.265 "data_size": 65536 00:11:57.265 } 00:11:57.265 ] 00:11:57.265 }' 00:11:57.265 14:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.265 14:36:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.525 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:57.525 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:57.525 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.525 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:57.525 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.525 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.525 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.525 14:36:49 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:57.525 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:57.525 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:57.525 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.525 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.525 [2024-10-01 14:36:49.186212] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:57.525 [2024-10-01 14:36:49.186303] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:57.787 [2024-10-01 14:36:49.243769] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:57.787 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.787 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:57.787 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:57.787 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:57.787 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.787 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.787 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.787 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.787 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:57.787 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:11:57.787 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:57.787 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.787 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.787 [2024-10-01 14:36:49.283823] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:57.787 [2024-10-01 14:36:49.283877] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:57.787 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.787 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:57.787 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:57.787 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.787 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:57.787 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.787 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.787 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.787 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:57.787 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:57.787 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:57.787 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:57.787 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:11:57.787 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:57.787 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.787 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.787 BaseBdev2 00:11:57.787 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.787 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:57.787 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:57.787 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:57.787 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:57.787 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:57.787 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:57.787 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:57.787 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.787 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.787 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.787 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:57.787 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.787 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:57.787 [ 00:11:57.787 { 00:11:57.787 "name": "BaseBdev2", 00:11:57.787 "aliases": [ 00:11:57.787 "01d56edd-49fb-4916-a99f-fd4367b99dbc" 00:11:57.787 ], 00:11:57.787 "product_name": "Malloc disk", 00:11:57.787 "block_size": 512, 00:11:57.787 "num_blocks": 65536, 00:11:57.787 "uuid": "01d56edd-49fb-4916-a99f-fd4367b99dbc", 00:11:57.787 "assigned_rate_limits": { 00:11:57.787 "rw_ios_per_sec": 0, 00:11:57.787 "rw_mbytes_per_sec": 0, 00:11:57.787 "r_mbytes_per_sec": 0, 00:11:57.787 "w_mbytes_per_sec": 0 00:11:57.787 }, 00:11:57.787 "claimed": false, 00:11:57.788 "zoned": false, 00:11:57.788 "supported_io_types": { 00:11:57.788 "read": true, 00:11:57.788 "write": true, 00:11:57.788 "unmap": true, 00:11:57.788 "flush": true, 00:11:57.788 "reset": true, 00:11:57.788 "nvme_admin": false, 00:11:57.788 "nvme_io": false, 00:11:57.788 "nvme_io_md": false, 00:11:57.788 "write_zeroes": true, 00:11:57.788 "zcopy": true, 00:11:57.788 "get_zone_info": false, 00:11:57.788 "zone_management": false, 00:11:57.788 "zone_append": false, 00:11:57.788 "compare": false, 00:11:57.788 "compare_and_write": false, 00:11:57.788 "abort": true, 00:11:57.788 "seek_hole": false, 00:11:57.788 "seek_data": false, 00:11:57.788 "copy": true, 00:11:57.788 "nvme_iov_md": false 00:11:57.788 }, 00:11:57.788 "memory_domains": [ 00:11:57.788 { 00:11:57.788 "dma_device_id": "system", 00:11:57.788 "dma_device_type": 1 00:11:57.788 }, 00:11:57.788 { 00:11:57.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.788 "dma_device_type": 2 00:11:57.788 } 00:11:57.788 ], 00:11:57.788 "driver_specific": {} 00:11:57.788 } 00:11:57.788 ] 00:11:57.788 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.788 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:57.788 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:57.788 14:36:49 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:57.788 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:57.788 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.788 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.788 BaseBdev3 00:11:57.788 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.788 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:57.788 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:57.788 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:57.788 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:57.788 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:57.788 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:57.788 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:57.788 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.788 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.048 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.048 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:58.048 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.048 14:36:49 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:58.048 [ 00:11:58.048 { 00:11:58.048 "name": "BaseBdev3", 00:11:58.048 "aliases": [ 00:11:58.048 "05629666-7712-4665-ade1-77f177207f9c" 00:11:58.048 ], 00:11:58.048 "product_name": "Malloc disk", 00:11:58.048 "block_size": 512, 00:11:58.048 "num_blocks": 65536, 00:11:58.048 "uuid": "05629666-7712-4665-ade1-77f177207f9c", 00:11:58.048 "assigned_rate_limits": { 00:11:58.048 "rw_ios_per_sec": 0, 00:11:58.048 "rw_mbytes_per_sec": 0, 00:11:58.048 "r_mbytes_per_sec": 0, 00:11:58.048 "w_mbytes_per_sec": 0 00:11:58.048 }, 00:11:58.048 "claimed": false, 00:11:58.048 "zoned": false, 00:11:58.048 "supported_io_types": { 00:11:58.048 "read": true, 00:11:58.048 "write": true, 00:11:58.048 "unmap": true, 00:11:58.048 "flush": true, 00:11:58.048 "reset": true, 00:11:58.048 "nvme_admin": false, 00:11:58.048 "nvme_io": false, 00:11:58.048 "nvme_io_md": false, 00:11:58.048 "write_zeroes": true, 00:11:58.048 "zcopy": true, 00:11:58.048 "get_zone_info": false, 00:11:58.048 "zone_management": false, 00:11:58.048 "zone_append": false, 00:11:58.048 "compare": false, 00:11:58.048 "compare_and_write": false, 00:11:58.048 "abort": true, 00:11:58.048 "seek_hole": false, 00:11:58.048 "seek_data": false, 00:11:58.048 "copy": true, 00:11:58.048 "nvme_iov_md": false 00:11:58.048 }, 00:11:58.048 "memory_domains": [ 00:11:58.048 { 00:11:58.049 "dma_device_id": "system", 00:11:58.049 "dma_device_type": 1 00:11:58.049 }, 00:11:58.049 { 00:11:58.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.049 "dma_device_type": 2 00:11:58.049 } 00:11:58.049 ], 00:11:58.049 "driver_specific": {} 00:11:58.049 } 00:11:58.049 ] 00:11:58.049 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.049 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:58.049 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:58.049 14:36:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:58.049 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:58.049 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.049 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.049 [2024-10-01 14:36:49.496571] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:58.049 [2024-10-01 14:36:49.496782] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:58.049 [2024-10-01 14:36:49.496857] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:58.049 [2024-10-01 14:36:49.498752] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:58.049 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.049 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:11:58.049 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.049 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.049 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:58.049 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:58.049 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:58.049 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.049 14:36:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.049 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.049 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.049 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.049 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.049 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.049 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.049 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.049 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.049 "name": "Existed_Raid", 00:11:58.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.049 "strip_size_kb": 64, 00:11:58.049 "state": "configuring", 00:11:58.049 "raid_level": "raid5f", 00:11:58.049 "superblock": false, 00:11:58.049 "num_base_bdevs": 3, 00:11:58.049 "num_base_bdevs_discovered": 2, 00:11:58.049 "num_base_bdevs_operational": 3, 00:11:58.049 "base_bdevs_list": [ 00:11:58.049 { 00:11:58.049 "name": "BaseBdev1", 00:11:58.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.049 "is_configured": false, 00:11:58.049 "data_offset": 0, 00:11:58.049 "data_size": 0 00:11:58.049 }, 00:11:58.049 { 00:11:58.049 "name": "BaseBdev2", 00:11:58.049 "uuid": "01d56edd-49fb-4916-a99f-fd4367b99dbc", 00:11:58.049 "is_configured": true, 00:11:58.049 "data_offset": 0, 00:11:58.049 "data_size": 65536 00:11:58.049 }, 00:11:58.049 { 00:11:58.049 "name": "BaseBdev3", 00:11:58.049 "uuid": "05629666-7712-4665-ade1-77f177207f9c", 00:11:58.049 "is_configured": true, 
00:11:58.049 "data_offset": 0, 00:11:58.049 "data_size": 65536 00:11:58.049 } 00:11:58.049 ] 00:11:58.049 }' 00:11:58.049 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.049 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.307 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:58.307 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.307 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.307 [2024-10-01 14:36:49.824649] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:58.307 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.307 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:11:58.307 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.307 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.307 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:58.307 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:58.307 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:58.307 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.307 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.307 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.307 14:36:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.307 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.307 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.307 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.307 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.307 14:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.307 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.307 "name": "Existed_Raid", 00:11:58.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.307 "strip_size_kb": 64, 00:11:58.307 "state": "configuring", 00:11:58.307 "raid_level": "raid5f", 00:11:58.307 "superblock": false, 00:11:58.307 "num_base_bdevs": 3, 00:11:58.307 "num_base_bdevs_discovered": 1, 00:11:58.307 "num_base_bdevs_operational": 3, 00:11:58.307 "base_bdevs_list": [ 00:11:58.307 { 00:11:58.307 "name": "BaseBdev1", 00:11:58.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.307 "is_configured": false, 00:11:58.307 "data_offset": 0, 00:11:58.307 "data_size": 0 00:11:58.307 }, 00:11:58.307 { 00:11:58.307 "name": null, 00:11:58.307 "uuid": "01d56edd-49fb-4916-a99f-fd4367b99dbc", 00:11:58.307 "is_configured": false, 00:11:58.307 "data_offset": 0, 00:11:58.307 "data_size": 65536 00:11:58.307 }, 00:11:58.307 { 00:11:58.307 "name": "BaseBdev3", 00:11:58.307 "uuid": "05629666-7712-4665-ade1-77f177207f9c", 00:11:58.307 "is_configured": true, 00:11:58.307 "data_offset": 0, 00:11:58.307 "data_size": 65536 00:11:58.307 } 00:11:58.307 ] 00:11:58.307 }' 00:11:58.307 14:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.307 14:36:49 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.565 14:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.565 14:36:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.565 14:36:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.565 14:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:58.565 14:36:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.565 14:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:58.565 14:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:58.565 14:36:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.565 14:36:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.565 [2024-10-01 14:36:50.216527] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:58.565 BaseBdev1 00:11:58.565 14:36:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.565 14:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:58.565 14:36:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:58.565 14:36:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:58.565 14:36:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:58.565 14:36:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:58.565 14:36:50 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:58.565 14:36:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:58.565 14:36:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.565 14:36:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.565 14:36:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.566 14:36:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:58.566 14:36:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.566 14:36:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.566 [ 00:11:58.566 { 00:11:58.566 "name": "BaseBdev1", 00:11:58.566 "aliases": [ 00:11:58.566 "621f2132-7d69-481f-8506-f5bf58f2facf" 00:11:58.566 ], 00:11:58.566 "product_name": "Malloc disk", 00:11:58.566 "block_size": 512, 00:11:58.566 "num_blocks": 65536, 00:11:58.566 "uuid": "621f2132-7d69-481f-8506-f5bf58f2facf", 00:11:58.566 "assigned_rate_limits": { 00:11:58.566 "rw_ios_per_sec": 0, 00:11:58.566 "rw_mbytes_per_sec": 0, 00:11:58.566 "r_mbytes_per_sec": 0, 00:11:58.566 "w_mbytes_per_sec": 0 00:11:58.566 }, 00:11:58.566 "claimed": true, 00:11:58.566 "claim_type": "exclusive_write", 00:11:58.566 "zoned": false, 00:11:58.566 "supported_io_types": { 00:11:58.566 "read": true, 00:11:58.566 "write": true, 00:11:58.566 "unmap": true, 00:11:58.566 "flush": true, 00:11:58.566 "reset": true, 00:11:58.566 "nvme_admin": false, 00:11:58.566 "nvme_io": false, 00:11:58.566 "nvme_io_md": false, 00:11:58.566 "write_zeroes": true, 00:11:58.566 "zcopy": true, 00:11:58.566 "get_zone_info": false, 00:11:58.566 "zone_management": false, 00:11:58.566 "zone_append": false, 00:11:58.566 
"compare": false, 00:11:58.566 "compare_and_write": false, 00:11:58.566 "abort": true, 00:11:58.566 "seek_hole": false, 00:11:58.566 "seek_data": false, 00:11:58.566 "copy": true, 00:11:58.566 "nvme_iov_md": false 00:11:58.566 }, 00:11:58.566 "memory_domains": [ 00:11:58.566 { 00:11:58.566 "dma_device_id": "system", 00:11:58.566 "dma_device_type": 1 00:11:58.566 }, 00:11:58.566 { 00:11:58.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.566 "dma_device_type": 2 00:11:58.566 } 00:11:58.566 ], 00:11:58.566 "driver_specific": {} 00:11:58.566 } 00:11:58.566 ] 00:11:58.566 14:36:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.566 14:36:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:58.566 14:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:11:58.566 14:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.566 14:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.566 14:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:58.566 14:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:58.566 14:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:58.566 14:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.566 14:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.566 14:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.566 14:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.566 14:36:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.566 14:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.566 14:36:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.566 14:36:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.824 14:36:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.825 14:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.825 "name": "Existed_Raid", 00:11:58.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.825 "strip_size_kb": 64, 00:11:58.825 "state": "configuring", 00:11:58.825 "raid_level": "raid5f", 00:11:58.825 "superblock": false, 00:11:58.825 "num_base_bdevs": 3, 00:11:58.825 "num_base_bdevs_discovered": 2, 00:11:58.825 "num_base_bdevs_operational": 3, 00:11:58.825 "base_bdevs_list": [ 00:11:58.825 { 00:11:58.825 "name": "BaseBdev1", 00:11:58.825 "uuid": "621f2132-7d69-481f-8506-f5bf58f2facf", 00:11:58.825 "is_configured": true, 00:11:58.825 "data_offset": 0, 00:11:58.825 "data_size": 65536 00:11:58.825 }, 00:11:58.825 { 00:11:58.825 "name": null, 00:11:58.825 "uuid": "01d56edd-49fb-4916-a99f-fd4367b99dbc", 00:11:58.825 "is_configured": false, 00:11:58.825 "data_offset": 0, 00:11:58.825 "data_size": 65536 00:11:58.825 }, 00:11:58.825 { 00:11:58.825 "name": "BaseBdev3", 00:11:58.825 "uuid": "05629666-7712-4665-ade1-77f177207f9c", 00:11:58.825 "is_configured": true, 00:11:58.825 "data_offset": 0, 00:11:58.825 "data_size": 65536 00:11:58.825 } 00:11:58.825 ] 00:11:58.825 }' 00:11:58.825 14:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.825 14:36:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.084 14:36:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.084 14:36:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.084 14:36:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.084 14:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:59.084 14:36:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.084 14:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:59.084 14:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:59.084 14:36:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.084 14:36:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.084 [2024-10-01 14:36:50.600725] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:59.084 14:36:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.084 14:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:11:59.084 14:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.084 14:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:59.084 14:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:59.084 14:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:59.084 14:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:59.084 14:36:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.084 14:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.084 14:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.084 14:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.084 14:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.084 14:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.084 14:36:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.084 14:36:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.084 14:36:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.084 14:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.084 "name": "Existed_Raid", 00:11:59.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.084 "strip_size_kb": 64, 00:11:59.084 "state": "configuring", 00:11:59.084 "raid_level": "raid5f", 00:11:59.084 "superblock": false, 00:11:59.084 "num_base_bdevs": 3, 00:11:59.084 "num_base_bdevs_discovered": 1, 00:11:59.084 "num_base_bdevs_operational": 3, 00:11:59.084 "base_bdevs_list": [ 00:11:59.084 { 00:11:59.084 "name": "BaseBdev1", 00:11:59.084 "uuid": "621f2132-7d69-481f-8506-f5bf58f2facf", 00:11:59.084 "is_configured": true, 00:11:59.084 "data_offset": 0, 00:11:59.084 "data_size": 65536 00:11:59.084 }, 00:11:59.084 { 00:11:59.084 "name": null, 00:11:59.084 "uuid": "01d56edd-49fb-4916-a99f-fd4367b99dbc", 00:11:59.084 "is_configured": false, 00:11:59.084 "data_offset": 0, 00:11:59.084 "data_size": 65536 00:11:59.084 }, 00:11:59.084 { 00:11:59.084 "name": null, 
00:11:59.084 "uuid": "05629666-7712-4665-ade1-77f177207f9c", 00:11:59.084 "is_configured": false, 00:11:59.084 "data_offset": 0, 00:11:59.084 "data_size": 65536 00:11:59.084 } 00:11:59.084 ] 00:11:59.084 }' 00:11:59.084 14:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.084 14:36:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.342 14:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:59.342 14:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.342 14:36:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.342 14:36:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.342 14:36:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.342 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:59.342 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:59.342 14:36:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.342 14:36:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.342 [2024-10-01 14:36:51.016811] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:59.342 14:36:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.342 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:11:59.342 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.342 14:36:51 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:59.342 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:59.342 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:59.342 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:59.342 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.342 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.342 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.342 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.601 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.601 14:36:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.601 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.601 14:36:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.601 14:36:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.601 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.601 "name": "Existed_Raid", 00:11:59.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.601 "strip_size_kb": 64, 00:11:59.601 "state": "configuring", 00:11:59.601 "raid_level": "raid5f", 00:11:59.601 "superblock": false, 00:11:59.601 "num_base_bdevs": 3, 00:11:59.601 "num_base_bdevs_discovered": 2, 00:11:59.601 "num_base_bdevs_operational": 3, 00:11:59.601 "base_bdevs_list": [ 00:11:59.601 { 
00:11:59.601 "name": "BaseBdev1", 00:11:59.601 "uuid": "621f2132-7d69-481f-8506-f5bf58f2facf", 00:11:59.601 "is_configured": true, 00:11:59.601 "data_offset": 0, 00:11:59.601 "data_size": 65536 00:11:59.601 }, 00:11:59.601 { 00:11:59.601 "name": null, 00:11:59.601 "uuid": "01d56edd-49fb-4916-a99f-fd4367b99dbc", 00:11:59.601 "is_configured": false, 00:11:59.601 "data_offset": 0, 00:11:59.601 "data_size": 65536 00:11:59.601 }, 00:11:59.601 { 00:11:59.601 "name": "BaseBdev3", 00:11:59.601 "uuid": "05629666-7712-4665-ade1-77f177207f9c", 00:11:59.601 "is_configured": true, 00:11:59.601 "data_offset": 0, 00:11:59.601 "data_size": 65536 00:11:59.601 } 00:11:59.601 ] 00:11:59.601 }' 00:11:59.601 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.601 14:36:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.858 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.858 14:36:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.858 14:36:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.858 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:59.858 14:36:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.858 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:59.858 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:59.858 14:36:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.858 14:36:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.858 [2024-10-01 14:36:51.376946] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:59.858 14:36:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.858 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:11:59.858 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.858 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:59.858 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:11:59.858 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:59.858 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:59.858 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.858 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.858 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.858 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.858 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.858 14:36:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.858 14:36:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.858 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.858 14:36:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.858 14:36:51 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.858 "name": "Existed_Raid", 00:11:59.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.858 "strip_size_kb": 64, 00:11:59.858 "state": "configuring", 00:11:59.858 "raid_level": "raid5f", 00:11:59.858 "superblock": false, 00:11:59.858 "num_base_bdevs": 3, 00:11:59.858 "num_base_bdevs_discovered": 1, 00:11:59.858 "num_base_bdevs_operational": 3, 00:11:59.858 "base_bdevs_list": [ 00:11:59.858 { 00:11:59.858 "name": null, 00:11:59.858 "uuid": "621f2132-7d69-481f-8506-f5bf58f2facf", 00:11:59.858 "is_configured": false, 00:11:59.858 "data_offset": 0, 00:11:59.858 "data_size": 65536 00:11:59.858 }, 00:11:59.858 { 00:11:59.858 "name": null, 00:11:59.858 "uuid": "01d56edd-49fb-4916-a99f-fd4367b99dbc", 00:11:59.858 "is_configured": false, 00:11:59.858 "data_offset": 0, 00:11:59.858 "data_size": 65536 00:11:59.858 }, 00:11:59.858 { 00:11:59.858 "name": "BaseBdev3", 00:11:59.858 "uuid": "05629666-7712-4665-ade1-77f177207f9c", 00:11:59.858 "is_configured": true, 00:11:59.858 "data_offset": 0, 00:11:59.858 "data_size": 65536 00:11:59.858 } 00:11:59.858 ] 00:11:59.858 }' 00:11:59.858 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.858 14:36:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.115 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.115 14:36:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.115 14:36:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.115 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:00.115 14:36:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.374 14:36:51 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:00.374 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:00.374 14:36:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.374 14:36:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.374 [2024-10-01 14:36:51.804018] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:00.374 14:36:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.374 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:00.374 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:00.374 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:00.374 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:00.374 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:00.374 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:00.374 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.374 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.374 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.374 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.374 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.374 14:36:51 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.374 14:36:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.374 14:36:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.374 14:36:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.374 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.374 "name": "Existed_Raid", 00:12:00.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.374 "strip_size_kb": 64, 00:12:00.374 "state": "configuring", 00:12:00.374 "raid_level": "raid5f", 00:12:00.374 "superblock": false, 00:12:00.374 "num_base_bdevs": 3, 00:12:00.374 "num_base_bdevs_discovered": 2, 00:12:00.374 "num_base_bdevs_operational": 3, 00:12:00.374 "base_bdevs_list": [ 00:12:00.374 { 00:12:00.374 "name": null, 00:12:00.374 "uuid": "621f2132-7d69-481f-8506-f5bf58f2facf", 00:12:00.374 "is_configured": false, 00:12:00.374 "data_offset": 0, 00:12:00.374 "data_size": 65536 00:12:00.374 }, 00:12:00.374 { 00:12:00.374 "name": "BaseBdev2", 00:12:00.374 "uuid": "01d56edd-49fb-4916-a99f-fd4367b99dbc", 00:12:00.374 "is_configured": true, 00:12:00.374 "data_offset": 0, 00:12:00.374 "data_size": 65536 00:12:00.374 }, 00:12:00.374 { 00:12:00.374 "name": "BaseBdev3", 00:12:00.374 "uuid": "05629666-7712-4665-ade1-77f177207f9c", 00:12:00.374 "is_configured": true, 00:12:00.374 "data_offset": 0, 00:12:00.374 "data_size": 65536 00:12:00.374 } 00:12:00.374 ] 00:12:00.374 }' 00:12:00.374 14:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.374 14:36:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.713 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.713 14:36:52 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.713 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.713 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:00.713 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.713 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:00.713 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:00.713 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.713 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.713 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.713 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.713 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 621f2132-7d69-481f-8506-f5bf58f2facf 00:12:00.713 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.713 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.713 [2024-10-01 14:36:52.249337] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:00.713 [2024-10-01 14:36:52.249390] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:00.713 [2024-10-01 14:36:52.249400] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:00.713 [2024-10-01 14:36:52.249661] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:12:00.713 [2024-10-01 14:36:52.253389] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:00.713 [2024-10-01 14:36:52.253410] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:00.713 [2024-10-01 14:36:52.253677] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:00.713 NewBaseBdev 00:12:00.713 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.713 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:00.713 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:12:00.713 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:00.713 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:00.713 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:00.713 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:00.713 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:00.713 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.713 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.713 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.713 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:00.713 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.713 14:36:52 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.713 [ 00:12:00.713 { 00:12:00.713 "name": "NewBaseBdev", 00:12:00.713 "aliases": [ 00:12:00.713 "621f2132-7d69-481f-8506-f5bf58f2facf" 00:12:00.713 ], 00:12:00.713 "product_name": "Malloc disk", 00:12:00.713 "block_size": 512, 00:12:00.713 "num_blocks": 65536, 00:12:00.713 "uuid": "621f2132-7d69-481f-8506-f5bf58f2facf", 00:12:00.713 "assigned_rate_limits": { 00:12:00.713 "rw_ios_per_sec": 0, 00:12:00.713 "rw_mbytes_per_sec": 0, 00:12:00.713 "r_mbytes_per_sec": 0, 00:12:00.713 "w_mbytes_per_sec": 0 00:12:00.713 }, 00:12:00.713 "claimed": true, 00:12:00.713 "claim_type": "exclusive_write", 00:12:00.713 "zoned": false, 00:12:00.713 "supported_io_types": { 00:12:00.713 "read": true, 00:12:00.713 "write": true, 00:12:00.713 "unmap": true, 00:12:00.713 "flush": true, 00:12:00.713 "reset": true, 00:12:00.713 "nvme_admin": false, 00:12:00.713 "nvme_io": false, 00:12:00.713 "nvme_io_md": false, 00:12:00.713 "write_zeroes": true, 00:12:00.713 "zcopy": true, 00:12:00.713 "get_zone_info": false, 00:12:00.713 "zone_management": false, 00:12:00.713 "zone_append": false, 00:12:00.713 "compare": false, 00:12:00.713 "compare_and_write": false, 00:12:00.713 "abort": true, 00:12:00.713 "seek_hole": false, 00:12:00.713 "seek_data": false, 00:12:00.713 "copy": true, 00:12:00.713 "nvme_iov_md": false 00:12:00.713 }, 00:12:00.713 "memory_domains": [ 00:12:00.713 { 00:12:00.713 "dma_device_id": "system", 00:12:00.713 "dma_device_type": 1 00:12:00.713 }, 00:12:00.713 { 00:12:00.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.713 "dma_device_type": 2 00:12:00.713 } 00:12:00.713 ], 00:12:00.713 "driver_specific": {} 00:12:00.713 } 00:12:00.713 ] 00:12:00.713 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.713 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:00.713 14:36:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:12:00.713 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:00.713 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:00.713 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:00.713 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:00.713 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:00.713 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.713 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.713 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.713 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.713 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.713 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.713 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.713 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.713 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.713 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.713 "name": "Existed_Raid", 00:12:00.713 "uuid": "965ad49d-2f64-46e9-ae1f-00091b569b1c", 00:12:00.713 "strip_size_kb": 64, 00:12:00.713 "state": "online", 
00:12:00.713 "raid_level": "raid5f", 00:12:00.713 "superblock": false, 00:12:00.713 "num_base_bdevs": 3, 00:12:00.713 "num_base_bdevs_discovered": 3, 00:12:00.714 "num_base_bdevs_operational": 3, 00:12:00.714 "base_bdevs_list": [ 00:12:00.714 { 00:12:00.714 "name": "NewBaseBdev", 00:12:00.714 "uuid": "621f2132-7d69-481f-8506-f5bf58f2facf", 00:12:00.714 "is_configured": true, 00:12:00.714 "data_offset": 0, 00:12:00.714 "data_size": 65536 00:12:00.714 }, 00:12:00.714 { 00:12:00.714 "name": "BaseBdev2", 00:12:00.714 "uuid": "01d56edd-49fb-4916-a99f-fd4367b99dbc", 00:12:00.714 "is_configured": true, 00:12:00.714 "data_offset": 0, 00:12:00.714 "data_size": 65536 00:12:00.714 }, 00:12:00.714 { 00:12:00.714 "name": "BaseBdev3", 00:12:00.714 "uuid": "05629666-7712-4665-ade1-77f177207f9c", 00:12:00.714 "is_configured": true, 00:12:00.714 "data_offset": 0, 00:12:00.714 "data_size": 65536 00:12:00.714 } 00:12:00.714 ] 00:12:00.714 }' 00:12:00.714 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.714 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.982 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:00.982 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:00.982 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:00.982 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:00.982 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:00.982 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:00.982 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:00.982 14:36:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:00.982 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.982 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.982 [2024-10-01 14:36:52.618990] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:00.982 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.982 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:00.982 "name": "Existed_Raid", 00:12:00.982 "aliases": [ 00:12:00.982 "965ad49d-2f64-46e9-ae1f-00091b569b1c" 00:12:00.982 ], 00:12:00.982 "product_name": "Raid Volume", 00:12:00.982 "block_size": 512, 00:12:00.982 "num_blocks": 131072, 00:12:00.983 "uuid": "965ad49d-2f64-46e9-ae1f-00091b569b1c", 00:12:00.983 "assigned_rate_limits": { 00:12:00.983 "rw_ios_per_sec": 0, 00:12:00.983 "rw_mbytes_per_sec": 0, 00:12:00.983 "r_mbytes_per_sec": 0, 00:12:00.983 "w_mbytes_per_sec": 0 00:12:00.983 }, 00:12:00.983 "claimed": false, 00:12:00.983 "zoned": false, 00:12:00.983 "supported_io_types": { 00:12:00.983 "read": true, 00:12:00.983 "write": true, 00:12:00.983 "unmap": false, 00:12:00.983 "flush": false, 00:12:00.983 "reset": true, 00:12:00.983 "nvme_admin": false, 00:12:00.983 "nvme_io": false, 00:12:00.983 "nvme_io_md": false, 00:12:00.983 "write_zeroes": true, 00:12:00.983 "zcopy": false, 00:12:00.983 "get_zone_info": false, 00:12:00.983 "zone_management": false, 00:12:00.983 "zone_append": false, 00:12:00.983 "compare": false, 00:12:00.983 "compare_and_write": false, 00:12:00.983 "abort": false, 00:12:00.983 "seek_hole": false, 00:12:00.983 "seek_data": false, 00:12:00.983 "copy": false, 00:12:00.983 "nvme_iov_md": false 00:12:00.983 }, 00:12:00.983 "driver_specific": { 00:12:00.983 "raid": { 00:12:00.983 "uuid": 
"965ad49d-2f64-46e9-ae1f-00091b569b1c", 00:12:00.983 "strip_size_kb": 64, 00:12:00.983 "state": "online", 00:12:00.983 "raid_level": "raid5f", 00:12:00.983 "superblock": false, 00:12:00.983 "num_base_bdevs": 3, 00:12:00.983 "num_base_bdevs_discovered": 3, 00:12:00.983 "num_base_bdevs_operational": 3, 00:12:00.983 "base_bdevs_list": [ 00:12:00.983 { 00:12:00.983 "name": "NewBaseBdev", 00:12:00.983 "uuid": "621f2132-7d69-481f-8506-f5bf58f2facf", 00:12:00.983 "is_configured": true, 00:12:00.983 "data_offset": 0, 00:12:00.983 "data_size": 65536 00:12:00.983 }, 00:12:00.983 { 00:12:00.983 "name": "BaseBdev2", 00:12:00.983 "uuid": "01d56edd-49fb-4916-a99f-fd4367b99dbc", 00:12:00.983 "is_configured": true, 00:12:00.983 "data_offset": 0, 00:12:00.983 "data_size": 65536 00:12:00.983 }, 00:12:00.983 { 00:12:00.983 "name": "BaseBdev3", 00:12:00.983 "uuid": "05629666-7712-4665-ade1-77f177207f9c", 00:12:00.983 "is_configured": true, 00:12:00.983 "data_offset": 0, 00:12:00.983 "data_size": 65536 00:12:00.983 } 00:12:00.983 ] 00:12:00.983 } 00:12:00.983 } 00:12:00.983 }' 00:12:00.983 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:01.242 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:01.242 BaseBdev2 00:12:01.242 BaseBdev3' 00:12:01.242 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:01.242 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:01.242 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:01.243 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:01.243 14:36:52 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.243 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:01.243 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.243 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.243 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:01.243 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:01.243 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:01.243 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:01.243 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.243 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:01.243 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.243 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.243 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:01.243 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:01.243 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:01.243 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:01.243 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:12:01.243 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.243 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.243 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.243 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:01.243 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:01.243 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:01.243 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.243 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.243 [2024-10-01 14:36:52.814751] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:01.243 [2024-10-01 14:36:52.814786] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:01.243 [2024-10-01 14:36:52.814883] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:01.243 [2024-10-01 14:36:52.815201] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:01.243 [2024-10-01 14:36:52.815221] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:01.243 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.243 14:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 77936 00:12:01.243 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 77936 ']' 00:12:01.243 14:36:52 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 77936 00:12:01.243 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:12:01.243 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:01.243 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77936 00:12:01.243 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:01.243 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:01.243 killing process with pid 77936 00:12:01.243 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77936' 00:12:01.243 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 77936 00:12:01.243 [2024-10-01 14:36:52.848090] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:01.243 14:36:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 77936 00:12:01.501 [2024-10-01 14:36:53.066616] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:02.438 14:36:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:02.438 00:12:02.438 real 0m8.228s 00:12:02.438 user 0m12.872s 00:12:02.438 sys 0m1.392s 00:12:02.438 14:36:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:02.438 ************************************ 00:12:02.438 END TEST raid5f_state_function_test 00:12:02.438 ************************************ 00:12:02.438 14:36:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.438 14:36:54 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:12:02.438 14:36:54 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:02.438 14:36:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:02.438 14:36:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:02.438 ************************************ 00:12:02.438 START TEST raid5f_state_function_test_sb 00:12:02.438 ************************************ 00:12:02.438 14:36:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 true 00:12:02.438 14:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:12:02.438 14:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:02.438 14:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:02.438 14:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:02.438 14:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:02.438 14:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:02.438 14:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:02.438 14:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:02.438 14:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:02.438 14:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:02.438 14:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:02.438 14:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:02.438 14:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:02.438 14:36:54 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:02.438 14:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:02.438 14:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:02.438 14:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:02.438 14:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:02.438 14:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:02.438 14:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:02.438 14:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:02.438 14:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:12:02.438 14:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:02.438 14:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:02.438 14:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:02.438 14:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:02.438 14:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=78535 00:12:02.438 14:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78535' 00:12:02.438 Process raid pid: 78535 00:12:02.438 14:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 78535 00:12:02.438 14:36:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 78535 ']' 
00:12:02.438 14:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:02.438 14:36:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:02.438 14:36:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:02.438 14:36:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.438 14:36:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:02.438 14:36:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.699 [2024-10-01 14:36:54.156323] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
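The `waitforlisten 78535` call above blocks until the freshly started `bdev_svc` process is up and accepting RPCs on `/var/tmp/spdk.sock`. A minimal sketch of that polling pattern is below; the function name, socket path, and retry budget are illustrative assumptions, not SPDK's actual implementation (which also checks that the pid is still alive between polls).

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten idea: poll for the RPC UNIX-domain socket
# until it exists or a retry budget runs out. Prints "listening" on
# success, "timeout" (and returns 1) on failure.
wait_for_sock() {
  local sock=$1 max_retries=${2:-100} i
  for ((i = 0; i < max_retries; i++)); do
    # -S tests that the path exists and is a socket
    if [[ -S "$sock" ]]; then
      echo "listening"
      return 0
    fi
    sleep 0.1
  done
  echo "timeout"
  return 1
}
```

In the real harness this runs before any `rpc_cmd` so that the first RPC (here, `bdev_raid_create`) never races the daemon's startup.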
00:12:02.699 [2024-10-01 14:36:54.156483] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:02.699 [2024-10-01 14:36:54.313699] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.960 [2024-10-01 14:36:54.556958] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.219 [2024-10-01 14:36:54.726812] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:03.219 [2024-10-01 14:36:54.726879] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:03.477 14:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:03.478 14:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:12:03.478 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:03.478 14:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.478 14:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.478 [2024-10-01 14:36:55.020241] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:03.478 [2024-10-01 14:36:55.020300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:03.478 [2024-10-01 14:36:55.020311] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:03.478 [2024-10-01 14:36:55.020321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:03.478 [2024-10-01 14:36:55.020327] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:12:03.478 [2024-10-01 14:36:55.020336] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:03.478 14:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.478 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:03.478 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.478 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.478 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:03.478 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:03.478 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:03.478 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.478 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.478 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.478 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.478 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.478 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.478 14:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.478 14:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.478 14:36:55 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.478 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.478 "name": "Existed_Raid", 00:12:03.478 "uuid": "d0499bcd-8a50-4dde-af1e-65df469d277b", 00:12:03.478 "strip_size_kb": 64, 00:12:03.478 "state": "configuring", 00:12:03.478 "raid_level": "raid5f", 00:12:03.478 "superblock": true, 00:12:03.478 "num_base_bdevs": 3, 00:12:03.478 "num_base_bdevs_discovered": 0, 00:12:03.478 "num_base_bdevs_operational": 3, 00:12:03.478 "base_bdevs_list": [ 00:12:03.478 { 00:12:03.478 "name": "BaseBdev1", 00:12:03.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.478 "is_configured": false, 00:12:03.478 "data_offset": 0, 00:12:03.478 "data_size": 0 00:12:03.478 }, 00:12:03.478 { 00:12:03.478 "name": "BaseBdev2", 00:12:03.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.478 "is_configured": false, 00:12:03.478 "data_offset": 0, 00:12:03.478 "data_size": 0 00:12:03.478 }, 00:12:03.478 { 00:12:03.478 "name": "BaseBdev3", 00:12:03.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.478 "is_configured": false, 00:12:03.478 "data_offset": 0, 00:12:03.478 "data_size": 0 00:12:03.478 } 00:12:03.478 ] 00:12:03.478 }' 00:12:03.478 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.478 14:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.739 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:03.739 14:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.739 14:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.739 [2024-10-01 14:36:55.344230] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:03.739 
[2024-10-01 14:36:55.344290] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:03.739 14:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.739 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:03.739 14:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.739 14:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.739 [2024-10-01 14:36:55.356257] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:03.739 [2024-10-01 14:36:55.356311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:03.739 [2024-10-01 14:36:55.356320] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:03.739 [2024-10-01 14:36:55.356329] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:03.739 [2024-10-01 14:36:55.356336] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:03.739 [2024-10-01 14:36:55.356345] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:03.739 14:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.739 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:03.739 14:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.739 14:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.739 [2024-10-01 14:36:55.406601] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:03.739 BaseBdev1 00:12:03.739 14:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.739 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:03.739 14:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:03.739 14:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:03.739 14:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:03.739 14:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:03.739 14:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:03.739 14:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:03.739 14:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.739 14:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.739 14:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.739 14:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:03.739 14:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.739 14:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.999 [ 00:12:03.999 { 00:12:03.999 "name": "BaseBdev1", 00:12:03.999 "aliases": [ 00:12:03.999 "f25b65ae-2b45-4d86-a5a6-01f7623ce3c5" 00:12:03.999 ], 00:12:03.999 "product_name": "Malloc disk", 00:12:03.999 "block_size": 512, 00:12:03.999 
"num_blocks": 65536, 00:12:03.999 "uuid": "f25b65ae-2b45-4d86-a5a6-01f7623ce3c5", 00:12:03.999 "assigned_rate_limits": { 00:12:03.999 "rw_ios_per_sec": 0, 00:12:03.999 "rw_mbytes_per_sec": 0, 00:12:03.999 "r_mbytes_per_sec": 0, 00:12:03.999 "w_mbytes_per_sec": 0 00:12:03.999 }, 00:12:03.999 "claimed": true, 00:12:03.999 "claim_type": "exclusive_write", 00:12:03.999 "zoned": false, 00:12:03.999 "supported_io_types": { 00:12:03.999 "read": true, 00:12:03.999 "write": true, 00:12:03.999 "unmap": true, 00:12:03.999 "flush": true, 00:12:03.999 "reset": true, 00:12:03.999 "nvme_admin": false, 00:12:03.999 "nvme_io": false, 00:12:03.999 "nvme_io_md": false, 00:12:03.999 "write_zeroes": true, 00:12:03.999 "zcopy": true, 00:12:03.999 "get_zone_info": false, 00:12:03.999 "zone_management": false, 00:12:03.999 "zone_append": false, 00:12:03.999 "compare": false, 00:12:03.999 "compare_and_write": false, 00:12:03.999 "abort": true, 00:12:03.999 "seek_hole": false, 00:12:03.999 "seek_data": false, 00:12:03.999 "copy": true, 00:12:03.999 "nvme_iov_md": false 00:12:03.999 }, 00:12:03.999 "memory_domains": [ 00:12:03.999 { 00:12:03.999 "dma_device_id": "system", 00:12:03.999 "dma_device_type": 1 00:12:03.999 }, 00:12:03.999 { 00:12:03.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.999 "dma_device_type": 2 00:12:03.999 } 00:12:03.999 ], 00:12:03.999 "driver_specific": {} 00:12:03.999 } 00:12:03.999 ] 00:12:03.999 14:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.999 14:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:03.999 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:03.999 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.999 14:36:55 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.999 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:03.999 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:03.999 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:04.000 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.000 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.000 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.000 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.000 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.000 14:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.000 14:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.000 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.000 14:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.000 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.000 "name": "Existed_Raid", 00:12:04.000 "uuid": "824cdb11-e293-445f-aa81-3267ef607f36", 00:12:04.000 "strip_size_kb": 64, 00:12:04.000 "state": "configuring", 00:12:04.000 "raid_level": "raid5f", 00:12:04.000 "superblock": true, 00:12:04.000 "num_base_bdevs": 3, 00:12:04.000 "num_base_bdevs_discovered": 1, 00:12:04.000 "num_base_bdevs_operational": 3, 00:12:04.000 "base_bdevs_list": [ 00:12:04.000 { 00:12:04.000 
"name": "BaseBdev1", 00:12:04.000 "uuid": "f25b65ae-2b45-4d86-a5a6-01f7623ce3c5", 00:12:04.000 "is_configured": true, 00:12:04.000 "data_offset": 2048, 00:12:04.000 "data_size": 63488 00:12:04.000 }, 00:12:04.000 { 00:12:04.000 "name": "BaseBdev2", 00:12:04.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.000 "is_configured": false, 00:12:04.000 "data_offset": 0, 00:12:04.000 "data_size": 0 00:12:04.000 }, 00:12:04.000 { 00:12:04.000 "name": "BaseBdev3", 00:12:04.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.000 "is_configured": false, 00:12:04.000 "data_offset": 0, 00:12:04.000 "data_size": 0 00:12:04.000 } 00:12:04.000 ] 00:12:04.000 }' 00:12:04.000 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.000 14:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.261 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:04.261 14:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.261 14:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.261 [2024-10-01 14:36:55.758763] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:04.261 [2024-10-01 14:36:55.758847] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:04.261 14:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.261 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:04.261 14:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.261 14:36:55 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:12:04.261 [2024-10-01 14:36:55.766819] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:04.261 [2024-10-01 14:36:55.769070] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:04.261 [2024-10-01 14:36:55.769130] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:04.261 [2024-10-01 14:36:55.769141] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:04.261 [2024-10-01 14:36:55.769151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:04.261 14:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.261 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:04.261 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:04.261 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:04.261 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.261 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:04.261 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:04.261 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:04.261 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:04.261 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.261 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:12:04.261 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.261 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.261 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.261 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.261 14:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.261 14:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.261 14:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.261 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.261 "name": "Existed_Raid", 00:12:04.261 "uuid": "a05b660a-a932-4bbb-b8d4-5a881b86bd40", 00:12:04.261 "strip_size_kb": 64, 00:12:04.261 "state": "configuring", 00:12:04.261 "raid_level": "raid5f", 00:12:04.261 "superblock": true, 00:12:04.261 "num_base_bdevs": 3, 00:12:04.261 "num_base_bdevs_discovered": 1, 00:12:04.261 "num_base_bdevs_operational": 3, 00:12:04.262 "base_bdevs_list": [ 00:12:04.262 { 00:12:04.262 "name": "BaseBdev1", 00:12:04.262 "uuid": "f25b65ae-2b45-4d86-a5a6-01f7623ce3c5", 00:12:04.262 "is_configured": true, 00:12:04.262 "data_offset": 2048, 00:12:04.262 "data_size": 63488 00:12:04.262 }, 00:12:04.262 { 00:12:04.262 "name": "BaseBdev2", 00:12:04.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.262 "is_configured": false, 00:12:04.262 "data_offset": 0, 00:12:04.262 "data_size": 0 00:12:04.262 }, 00:12:04.262 { 00:12:04.262 "name": "BaseBdev3", 00:12:04.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.262 "is_configured": false, 00:12:04.262 "data_offset": 0, 00:12:04.262 "data_size": 
0 00:12:04.262 } 00:12:04.262 ] 00:12:04.262 }' 00:12:04.262 14:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.262 14:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.523 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:04.523 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.523 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.523 [2024-10-01 14:36:56.105102] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:04.523 BaseBdev2 00:12:04.523 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.523 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:04.523 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:04.523 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:04.523 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:04.523 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:04.523 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:04.523 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:04.523 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.523 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.523 14:36:56 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.523 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:04.523 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.523 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.523 [ 00:12:04.523 { 00:12:04.523 "name": "BaseBdev2", 00:12:04.523 "aliases": [ 00:12:04.523 "adc3edcb-1790-43a8-a0b7-809166cdb75e" 00:12:04.523 ], 00:12:04.523 "product_name": "Malloc disk", 00:12:04.523 "block_size": 512, 00:12:04.523 "num_blocks": 65536, 00:12:04.523 "uuid": "adc3edcb-1790-43a8-a0b7-809166cdb75e", 00:12:04.523 "assigned_rate_limits": { 00:12:04.523 "rw_ios_per_sec": 0, 00:12:04.523 "rw_mbytes_per_sec": 0, 00:12:04.523 "r_mbytes_per_sec": 0, 00:12:04.523 "w_mbytes_per_sec": 0 00:12:04.523 }, 00:12:04.523 "claimed": true, 00:12:04.523 "claim_type": "exclusive_write", 00:12:04.523 "zoned": false, 00:12:04.523 "supported_io_types": { 00:12:04.523 "read": true, 00:12:04.523 "write": true, 00:12:04.523 "unmap": true, 00:12:04.523 "flush": true, 00:12:04.523 "reset": true, 00:12:04.523 "nvme_admin": false, 00:12:04.523 "nvme_io": false, 00:12:04.523 "nvme_io_md": false, 00:12:04.523 "write_zeroes": true, 00:12:04.523 "zcopy": true, 00:12:04.523 "get_zone_info": false, 00:12:04.523 "zone_management": false, 00:12:04.523 "zone_append": false, 00:12:04.523 "compare": false, 00:12:04.523 "compare_and_write": false, 00:12:04.523 "abort": true, 00:12:04.523 "seek_hole": false, 00:12:04.523 "seek_data": false, 00:12:04.523 "copy": true, 00:12:04.523 "nvme_iov_md": false 00:12:04.523 }, 00:12:04.523 "memory_domains": [ 00:12:04.523 { 00:12:04.523 "dma_device_id": "system", 00:12:04.523 "dma_device_type": 1 00:12:04.523 }, 00:12:04.523 { 00:12:04.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.523 "dma_device_type": 2 00:12:04.523 } 
00:12:04.523 ], 00:12:04.523 "driver_specific": {} 00:12:04.523 } 00:12:04.523 ] 00:12:04.523 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.523 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:04.523 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:04.523 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:04.523 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:04.523 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.523 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:04.523 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:04.523 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:04.523 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:04.523 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.523 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.523 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.523 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.523 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.523 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:04.523 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.523 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.523 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.523 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.523 "name": "Existed_Raid", 00:12:04.523 "uuid": "a05b660a-a932-4bbb-b8d4-5a881b86bd40", 00:12:04.523 "strip_size_kb": 64, 00:12:04.523 "state": "configuring", 00:12:04.523 "raid_level": "raid5f", 00:12:04.523 "superblock": true, 00:12:04.523 "num_base_bdevs": 3, 00:12:04.523 "num_base_bdevs_discovered": 2, 00:12:04.523 "num_base_bdevs_operational": 3, 00:12:04.523 "base_bdevs_list": [ 00:12:04.523 { 00:12:04.523 "name": "BaseBdev1", 00:12:04.523 "uuid": "f25b65ae-2b45-4d86-a5a6-01f7623ce3c5", 00:12:04.523 "is_configured": true, 00:12:04.523 "data_offset": 2048, 00:12:04.523 "data_size": 63488 00:12:04.523 }, 00:12:04.523 { 00:12:04.523 "name": "BaseBdev2", 00:12:04.523 "uuid": "adc3edcb-1790-43a8-a0b7-809166cdb75e", 00:12:04.523 "is_configured": true, 00:12:04.523 "data_offset": 2048, 00:12:04.523 "data_size": 63488 00:12:04.523 }, 00:12:04.523 { 00:12:04.523 "name": "BaseBdev3", 00:12:04.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.523 "is_configured": false, 00:12:04.523 "data_offset": 0, 00:12:04.523 "data_size": 0 00:12:04.523 } 00:12:04.523 ] 00:12:04.523 }' 00:12:04.523 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.523 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.783 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:04.783 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:12:04.783 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.044 [2024-10-01 14:36:56.493557] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:05.044 [2024-10-01 14:36:56.493901] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:05.044 [2024-10-01 14:36:56.493930] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:05.044 [2024-10-01 14:36:56.494224] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:05.044 BaseBdev3 00:12:05.044 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.044 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:05.044 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:05.044 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:05.044 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:05.044 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:05.044 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:05.044 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:05.044 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.044 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.044 [2024-10-01 14:36:56.498268] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:05.044 [2024-10-01 14:36:56.498301] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:05.044 [2024-10-01 14:36:56.498590] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.044 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.044 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:05.044 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.044 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.044 [ 00:12:05.044 { 00:12:05.044 "name": "BaseBdev3", 00:12:05.044 "aliases": [ 00:12:05.044 "d53d212e-9844-4bf1-9c90-2fd2658b02ab" 00:12:05.044 ], 00:12:05.044 "product_name": "Malloc disk", 00:12:05.044 "block_size": 512, 00:12:05.044 "num_blocks": 65536, 00:12:05.044 "uuid": "d53d212e-9844-4bf1-9c90-2fd2658b02ab", 00:12:05.044 "assigned_rate_limits": { 00:12:05.044 "rw_ios_per_sec": 0, 00:12:05.044 "rw_mbytes_per_sec": 0, 00:12:05.044 "r_mbytes_per_sec": 0, 00:12:05.044 "w_mbytes_per_sec": 0 00:12:05.044 }, 00:12:05.044 "claimed": true, 00:12:05.044 "claim_type": "exclusive_write", 00:12:05.044 "zoned": false, 00:12:05.044 "supported_io_types": { 00:12:05.044 "read": true, 00:12:05.044 "write": true, 00:12:05.044 "unmap": true, 00:12:05.044 "flush": true, 00:12:05.044 "reset": true, 00:12:05.044 "nvme_admin": false, 00:12:05.044 "nvme_io": false, 00:12:05.044 "nvme_io_md": false, 00:12:05.044 "write_zeroes": true, 00:12:05.044 "zcopy": true, 00:12:05.044 "get_zone_info": false, 00:12:05.044 "zone_management": false, 00:12:05.044 "zone_append": false, 00:12:05.044 "compare": false, 00:12:05.044 "compare_and_write": false, 00:12:05.044 "abort": true, 00:12:05.044 "seek_hole": false, 00:12:05.044 "seek_data": false, 00:12:05.044 "copy": true, 00:12:05.044 
"nvme_iov_md": false 00:12:05.044 }, 00:12:05.044 "memory_domains": [ 00:12:05.044 { 00:12:05.044 "dma_device_id": "system", 00:12:05.044 "dma_device_type": 1 00:12:05.044 }, 00:12:05.044 { 00:12:05.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.044 "dma_device_type": 2 00:12:05.044 } 00:12:05.044 ], 00:12:05.044 "driver_specific": {} 00:12:05.044 } 00:12:05.044 ] 00:12:05.044 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.044 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:05.044 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:05.044 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:05.044 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:12:05.044 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.044 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:05.044 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:05.044 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:05.044 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:05.044 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.044 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.044 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.044 14:36:56 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.044 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.044 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.044 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.044 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.044 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.044 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.044 "name": "Existed_Raid", 00:12:05.044 "uuid": "a05b660a-a932-4bbb-b8d4-5a881b86bd40", 00:12:05.044 "strip_size_kb": 64, 00:12:05.044 "state": "online", 00:12:05.044 "raid_level": "raid5f", 00:12:05.044 "superblock": true, 00:12:05.044 "num_base_bdevs": 3, 00:12:05.044 "num_base_bdevs_discovered": 3, 00:12:05.044 "num_base_bdevs_operational": 3, 00:12:05.044 "base_bdevs_list": [ 00:12:05.044 { 00:12:05.044 "name": "BaseBdev1", 00:12:05.044 "uuid": "f25b65ae-2b45-4d86-a5a6-01f7623ce3c5", 00:12:05.044 "is_configured": true, 00:12:05.044 "data_offset": 2048, 00:12:05.044 "data_size": 63488 00:12:05.045 }, 00:12:05.045 { 00:12:05.045 "name": "BaseBdev2", 00:12:05.045 "uuid": "adc3edcb-1790-43a8-a0b7-809166cdb75e", 00:12:05.045 "is_configured": true, 00:12:05.045 "data_offset": 2048, 00:12:05.045 "data_size": 63488 00:12:05.045 }, 00:12:05.045 { 00:12:05.045 "name": "BaseBdev3", 00:12:05.045 "uuid": "d53d212e-9844-4bf1-9c90-2fd2658b02ab", 00:12:05.045 "is_configured": true, 00:12:05.045 "data_offset": 2048, 00:12:05.045 "data_size": 63488 00:12:05.045 } 00:12:05.045 ] 00:12:05.045 }' 00:12:05.045 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.045 14:36:56 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.305 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:05.305 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:05.306 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:05.306 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:05.306 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:05.306 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:05.306 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:05.306 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:05.306 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.306 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.306 [2024-10-01 14:36:56.863797] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:05.306 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.306 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:05.306 "name": "Existed_Raid", 00:12:05.306 "aliases": [ 00:12:05.306 "a05b660a-a932-4bbb-b8d4-5a881b86bd40" 00:12:05.306 ], 00:12:05.306 "product_name": "Raid Volume", 00:12:05.306 "block_size": 512, 00:12:05.306 "num_blocks": 126976, 00:12:05.306 "uuid": "a05b660a-a932-4bbb-b8d4-5a881b86bd40", 00:12:05.306 "assigned_rate_limits": { 00:12:05.306 "rw_ios_per_sec": 0, 00:12:05.306 
"rw_mbytes_per_sec": 0, 00:12:05.306 "r_mbytes_per_sec": 0, 00:12:05.306 "w_mbytes_per_sec": 0 00:12:05.306 }, 00:12:05.306 "claimed": false, 00:12:05.306 "zoned": false, 00:12:05.306 "supported_io_types": { 00:12:05.306 "read": true, 00:12:05.306 "write": true, 00:12:05.306 "unmap": false, 00:12:05.306 "flush": false, 00:12:05.306 "reset": true, 00:12:05.306 "nvme_admin": false, 00:12:05.306 "nvme_io": false, 00:12:05.306 "nvme_io_md": false, 00:12:05.306 "write_zeroes": true, 00:12:05.306 "zcopy": false, 00:12:05.306 "get_zone_info": false, 00:12:05.306 "zone_management": false, 00:12:05.306 "zone_append": false, 00:12:05.306 "compare": false, 00:12:05.306 "compare_and_write": false, 00:12:05.306 "abort": false, 00:12:05.306 "seek_hole": false, 00:12:05.306 "seek_data": false, 00:12:05.306 "copy": false, 00:12:05.306 "nvme_iov_md": false 00:12:05.306 }, 00:12:05.306 "driver_specific": { 00:12:05.306 "raid": { 00:12:05.306 "uuid": "a05b660a-a932-4bbb-b8d4-5a881b86bd40", 00:12:05.306 "strip_size_kb": 64, 00:12:05.306 "state": "online", 00:12:05.306 "raid_level": "raid5f", 00:12:05.306 "superblock": true, 00:12:05.306 "num_base_bdevs": 3, 00:12:05.306 "num_base_bdevs_discovered": 3, 00:12:05.306 "num_base_bdevs_operational": 3, 00:12:05.306 "base_bdevs_list": [ 00:12:05.306 { 00:12:05.306 "name": "BaseBdev1", 00:12:05.306 "uuid": "f25b65ae-2b45-4d86-a5a6-01f7623ce3c5", 00:12:05.306 "is_configured": true, 00:12:05.306 "data_offset": 2048, 00:12:05.306 "data_size": 63488 00:12:05.306 }, 00:12:05.306 { 00:12:05.306 "name": "BaseBdev2", 00:12:05.306 "uuid": "adc3edcb-1790-43a8-a0b7-809166cdb75e", 00:12:05.306 "is_configured": true, 00:12:05.306 "data_offset": 2048, 00:12:05.306 "data_size": 63488 00:12:05.306 }, 00:12:05.306 { 00:12:05.306 "name": "BaseBdev3", 00:12:05.306 "uuid": "d53d212e-9844-4bf1-9c90-2fd2658b02ab", 00:12:05.306 "is_configured": true, 00:12:05.306 "data_offset": 2048, 00:12:05.306 "data_size": 63488 00:12:05.306 } 00:12:05.306 ] 00:12:05.306 } 
00:12:05.306 } 00:12:05.306 }' 00:12:05.306 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:05.306 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:05.306 BaseBdev2 00:12:05.306 BaseBdev3' 00:12:05.306 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.306 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:05.306 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.306 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:05.306 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.306 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.306 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.306 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.566 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.566 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.566 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.566 14:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.566 14:36:56 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:05.566 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.566 14:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.566 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.566 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.566 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.566 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.566 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:05.566 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.566 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.566 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.566 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.566 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.566 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.566 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:05.567 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.567 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.567 [2024-10-01 
14:36:57.091597] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:05.567 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.567 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:05.567 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:12:05.567 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:05.567 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:05.567 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:05.567 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:12:05.567 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.567 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:05.567 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:05.567 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:05.567 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:05.567 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.567 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.567 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.567 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.567 14:36:57 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.567 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.567 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.567 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.567 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.567 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.567 "name": "Existed_Raid", 00:12:05.567 "uuid": "a05b660a-a932-4bbb-b8d4-5a881b86bd40", 00:12:05.567 "strip_size_kb": 64, 00:12:05.567 "state": "online", 00:12:05.567 "raid_level": "raid5f", 00:12:05.567 "superblock": true, 00:12:05.567 "num_base_bdevs": 3, 00:12:05.567 "num_base_bdevs_discovered": 2, 00:12:05.567 "num_base_bdevs_operational": 2, 00:12:05.567 "base_bdevs_list": [ 00:12:05.567 { 00:12:05.567 "name": null, 00:12:05.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.567 "is_configured": false, 00:12:05.567 "data_offset": 0, 00:12:05.567 "data_size": 63488 00:12:05.567 }, 00:12:05.567 { 00:12:05.567 "name": "BaseBdev2", 00:12:05.567 "uuid": "adc3edcb-1790-43a8-a0b7-809166cdb75e", 00:12:05.567 "is_configured": true, 00:12:05.567 "data_offset": 2048, 00:12:05.567 "data_size": 63488 00:12:05.567 }, 00:12:05.567 { 00:12:05.567 "name": "BaseBdev3", 00:12:05.567 "uuid": "d53d212e-9844-4bf1-9c90-2fd2658b02ab", 00:12:05.567 "is_configured": true, 00:12:05.567 "data_offset": 2048, 00:12:05.567 "data_size": 63488 00:12:05.567 } 00:12:05.567 ] 00:12:05.567 }' 00:12:05.567 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.567 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:12:06.137 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:06.137 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:06.137 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.137 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:06.137 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.137 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.137 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.137 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:06.137 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:06.137 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:06.137 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.137 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.137 [2024-10-01 14:36:57.570267] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:06.137 [2024-10-01 14:36:57.570622] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:06.137 [2024-10-01 14:36:57.637686] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:06.137 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.137 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:06.137 14:36:57 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:06.137 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.137 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.137 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.137 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:06.137 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.137 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:06.137 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:06.137 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:06.137 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.137 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.137 [2024-10-01 14:36:57.677789] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:06.137 [2024-10-01 14:36:57.677856] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:06.137 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.137 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:06.137 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:06.137 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.137 
14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.137 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.137 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:06.137 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.137 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:06.137 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:06.137 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:06.137 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:06.137 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:06.137 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:06.137 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.137 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.398 BaseBdev2 00:12:06.398 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.398 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:06.398 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:06.398 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:06.398 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:06.398 14:36:57 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:06.398 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:06.398 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:06.398 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.398 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.398 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.398 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:06.398 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.398 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.398 [ 00:12:06.398 { 00:12:06.398 "name": "BaseBdev2", 00:12:06.398 "aliases": [ 00:12:06.398 "123ca75e-e632-490a-a537-79c82fc94ce0" 00:12:06.398 ], 00:12:06.398 "product_name": "Malloc disk", 00:12:06.398 "block_size": 512, 00:12:06.398 "num_blocks": 65536, 00:12:06.398 "uuid": "123ca75e-e632-490a-a537-79c82fc94ce0", 00:12:06.398 "assigned_rate_limits": { 00:12:06.398 "rw_ios_per_sec": 0, 00:12:06.398 "rw_mbytes_per_sec": 0, 00:12:06.398 "r_mbytes_per_sec": 0, 00:12:06.398 "w_mbytes_per_sec": 0 00:12:06.398 }, 00:12:06.398 "claimed": false, 00:12:06.398 "zoned": false, 00:12:06.398 "supported_io_types": { 00:12:06.398 "read": true, 00:12:06.398 "write": true, 00:12:06.398 "unmap": true, 00:12:06.398 "flush": true, 00:12:06.398 "reset": true, 00:12:06.398 "nvme_admin": false, 00:12:06.398 "nvme_io": false, 00:12:06.398 "nvme_io_md": false, 00:12:06.398 "write_zeroes": true, 00:12:06.398 "zcopy": true, 00:12:06.398 "get_zone_info": false, 
00:12:06.398 "zone_management": false, 00:12:06.398 "zone_append": false, 00:12:06.398 "compare": false, 00:12:06.398 "compare_and_write": false, 00:12:06.398 "abort": true, 00:12:06.398 "seek_hole": false, 00:12:06.398 "seek_data": false, 00:12:06.398 "copy": true, 00:12:06.398 "nvme_iov_md": false 00:12:06.398 }, 00:12:06.398 "memory_domains": [ 00:12:06.398 { 00:12:06.398 "dma_device_id": "system", 00:12:06.398 "dma_device_type": 1 00:12:06.398 }, 00:12:06.398 { 00:12:06.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.398 "dma_device_type": 2 00:12:06.398 } 00:12:06.398 ], 00:12:06.398 "driver_specific": {} 00:12:06.398 } 00:12:06.398 ] 00:12:06.398 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.398 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:06.398 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:06.398 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:06.398 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:06.398 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.398 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.398 BaseBdev3 00:12:06.398 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.398 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:06.398 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:06.398 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:06.398 14:36:57 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:06.398 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:06.398 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:06.398 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:06.398 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.398 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.398 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.398 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:06.398 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.398 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.398 [ 00:12:06.398 { 00:12:06.398 "name": "BaseBdev3", 00:12:06.398 "aliases": [ 00:12:06.398 "847286d4-0947-4436-b562-44765032f976" 00:12:06.398 ], 00:12:06.398 "product_name": "Malloc disk", 00:12:06.398 "block_size": 512, 00:12:06.398 "num_blocks": 65536, 00:12:06.398 "uuid": "847286d4-0947-4436-b562-44765032f976", 00:12:06.398 "assigned_rate_limits": { 00:12:06.398 "rw_ios_per_sec": 0, 00:12:06.398 "rw_mbytes_per_sec": 0, 00:12:06.398 "r_mbytes_per_sec": 0, 00:12:06.398 "w_mbytes_per_sec": 0 00:12:06.398 }, 00:12:06.398 "claimed": false, 00:12:06.399 "zoned": false, 00:12:06.399 "supported_io_types": { 00:12:06.399 "read": true, 00:12:06.399 "write": true, 00:12:06.399 "unmap": true, 00:12:06.399 "flush": true, 00:12:06.399 "reset": true, 00:12:06.399 "nvme_admin": false, 00:12:06.399 "nvme_io": false, 00:12:06.399 "nvme_io_md": 
false, 00:12:06.399 "write_zeroes": true, 00:12:06.399 "zcopy": true, 00:12:06.399 "get_zone_info": false, 00:12:06.399 "zone_management": false, 00:12:06.399 "zone_append": false, 00:12:06.399 "compare": false, 00:12:06.399 "compare_and_write": false, 00:12:06.399 "abort": true, 00:12:06.399 "seek_hole": false, 00:12:06.399 "seek_data": false, 00:12:06.399 "copy": true, 00:12:06.399 "nvme_iov_md": false 00:12:06.399 }, 00:12:06.399 "memory_domains": [ 00:12:06.399 { 00:12:06.399 "dma_device_id": "system", 00:12:06.399 "dma_device_type": 1 00:12:06.399 }, 00:12:06.399 { 00:12:06.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.399 "dma_device_type": 2 00:12:06.399 } 00:12:06.399 ], 00:12:06.399 "driver_specific": {} 00:12:06.399 } 00:12:06.399 ] 00:12:06.399 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.399 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:06.399 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:06.399 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:06.399 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:06.399 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.399 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.399 [2024-10-01 14:36:57.916492] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:06.399 [2024-10-01 14:36:57.916734] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:06.399 [2024-10-01 14:36:57.916837] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:12:06.399 [2024-10-01 14:36:57.919148] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:06.399 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.399 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:06.399 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:06.399 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:06.399 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:06.399 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:06.399 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:06.399 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.399 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.399 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.399 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.399 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.399 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.399 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.399 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.399 14:36:57 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.399 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.399 "name": "Existed_Raid", 00:12:06.399 "uuid": "30ec6385-f8bd-4fcc-9d5e-b9d8e940390a", 00:12:06.399 "strip_size_kb": 64, 00:12:06.399 "state": "configuring", 00:12:06.399 "raid_level": "raid5f", 00:12:06.399 "superblock": true, 00:12:06.399 "num_base_bdevs": 3, 00:12:06.399 "num_base_bdevs_discovered": 2, 00:12:06.399 "num_base_bdevs_operational": 3, 00:12:06.399 "base_bdevs_list": [ 00:12:06.399 { 00:12:06.399 "name": "BaseBdev1", 00:12:06.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.399 "is_configured": false, 00:12:06.399 "data_offset": 0, 00:12:06.399 "data_size": 0 00:12:06.399 }, 00:12:06.399 { 00:12:06.399 "name": "BaseBdev2", 00:12:06.399 "uuid": "123ca75e-e632-490a-a537-79c82fc94ce0", 00:12:06.399 "is_configured": true, 00:12:06.399 "data_offset": 2048, 00:12:06.399 "data_size": 63488 00:12:06.399 }, 00:12:06.399 { 00:12:06.399 "name": "BaseBdev3", 00:12:06.399 "uuid": "847286d4-0947-4436-b562-44765032f976", 00:12:06.399 "is_configured": true, 00:12:06.399 "data_offset": 2048, 00:12:06.399 "data_size": 63488 00:12:06.399 } 00:12:06.399 ] 00:12:06.399 }' 00:12:06.399 14:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.399 14:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.660 14:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:06.660 14:36:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.660 14:36:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.660 [2024-10-01 14:36:58.260510] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:06.660 
14:36:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.660 14:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:06.660 14:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:06.660 14:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:06.660 14:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:06.660 14:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:06.660 14:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:06.660 14:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.660 14:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.660 14:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.660 14:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.660 14:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.660 14:36:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.660 14:36:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.660 14:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.660 14:36:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.660 14:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:12:06.660 "name": "Existed_Raid", 00:12:06.660 "uuid": "30ec6385-f8bd-4fcc-9d5e-b9d8e940390a", 00:12:06.660 "strip_size_kb": 64, 00:12:06.660 "state": "configuring", 00:12:06.660 "raid_level": "raid5f", 00:12:06.660 "superblock": true, 00:12:06.660 "num_base_bdevs": 3, 00:12:06.660 "num_base_bdevs_discovered": 1, 00:12:06.660 "num_base_bdevs_operational": 3, 00:12:06.660 "base_bdevs_list": [ 00:12:06.660 { 00:12:06.660 "name": "BaseBdev1", 00:12:06.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.660 "is_configured": false, 00:12:06.660 "data_offset": 0, 00:12:06.660 "data_size": 0 00:12:06.660 }, 00:12:06.660 { 00:12:06.660 "name": null, 00:12:06.660 "uuid": "123ca75e-e632-490a-a537-79c82fc94ce0", 00:12:06.660 "is_configured": false, 00:12:06.660 "data_offset": 0, 00:12:06.660 "data_size": 63488 00:12:06.660 }, 00:12:06.660 { 00:12:06.660 "name": "BaseBdev3", 00:12:06.660 "uuid": "847286d4-0947-4436-b562-44765032f976", 00:12:06.660 "is_configured": true, 00:12:06.660 "data_offset": 2048, 00:12:06.660 "data_size": 63488 00:12:06.660 } 00:12:06.660 ] 00:12:06.660 }' 00:12:06.660 14:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.660 14:36:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.921 14:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.921 14:36:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.921 14:36:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.921 14:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:07.182 14:36:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.182 14:36:58 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:07.182 14:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:07.182 14:36:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.182 14:36:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.182 [2024-10-01 14:36:58.660918] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:07.182 BaseBdev1 00:12:07.182 14:36:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.182 14:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:07.182 14:36:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:07.182 14:36:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:07.182 14:36:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:07.182 14:36:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:07.182 14:36:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:07.182 14:36:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:07.182 14:36:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.182 14:36:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.182 14:36:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.182 14:36:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:07.182 
14:36:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.182 14:36:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.182 [ 00:12:07.182 { 00:12:07.182 "name": "BaseBdev1", 00:12:07.182 "aliases": [ 00:12:07.182 "d3bf97f1-20b8-4a08-881f-c7dcdee3783d" 00:12:07.182 ], 00:12:07.182 "product_name": "Malloc disk", 00:12:07.182 "block_size": 512, 00:12:07.182 "num_blocks": 65536, 00:12:07.182 "uuid": "d3bf97f1-20b8-4a08-881f-c7dcdee3783d", 00:12:07.182 "assigned_rate_limits": { 00:12:07.182 "rw_ios_per_sec": 0, 00:12:07.182 "rw_mbytes_per_sec": 0, 00:12:07.182 "r_mbytes_per_sec": 0, 00:12:07.182 "w_mbytes_per_sec": 0 00:12:07.182 }, 00:12:07.182 "claimed": true, 00:12:07.182 "claim_type": "exclusive_write", 00:12:07.182 "zoned": false, 00:12:07.182 "supported_io_types": { 00:12:07.182 "read": true, 00:12:07.182 "write": true, 00:12:07.182 "unmap": true, 00:12:07.182 "flush": true, 00:12:07.182 "reset": true, 00:12:07.182 "nvme_admin": false, 00:12:07.182 "nvme_io": false, 00:12:07.182 "nvme_io_md": false, 00:12:07.182 "write_zeroes": true, 00:12:07.182 "zcopy": true, 00:12:07.182 "get_zone_info": false, 00:12:07.182 "zone_management": false, 00:12:07.182 "zone_append": false, 00:12:07.182 "compare": false, 00:12:07.182 "compare_and_write": false, 00:12:07.182 "abort": true, 00:12:07.182 "seek_hole": false, 00:12:07.182 "seek_data": false, 00:12:07.182 "copy": true, 00:12:07.182 "nvme_iov_md": false 00:12:07.182 }, 00:12:07.182 "memory_domains": [ 00:12:07.182 { 00:12:07.182 "dma_device_id": "system", 00:12:07.182 "dma_device_type": 1 00:12:07.183 }, 00:12:07.183 { 00:12:07.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.183 "dma_device_type": 2 00:12:07.183 } 00:12:07.183 ], 00:12:07.183 "driver_specific": {} 00:12:07.183 } 00:12:07.183 ] 00:12:07.183 14:36:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.183 
14:36:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:07.183 14:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:07.183 14:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.183 14:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.183 14:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:07.183 14:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:07.183 14:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:07.183 14:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.183 14:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.183 14:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.183 14:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.183 14:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.183 14:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.183 14:36:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.183 14:36:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.183 14:36:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.183 14:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:12:07.183 "name": "Existed_Raid", 00:12:07.183 "uuid": "30ec6385-f8bd-4fcc-9d5e-b9d8e940390a", 00:12:07.183 "strip_size_kb": 64, 00:12:07.183 "state": "configuring", 00:12:07.183 "raid_level": "raid5f", 00:12:07.183 "superblock": true, 00:12:07.183 "num_base_bdevs": 3, 00:12:07.183 "num_base_bdevs_discovered": 2, 00:12:07.183 "num_base_bdevs_operational": 3, 00:12:07.183 "base_bdevs_list": [ 00:12:07.183 { 00:12:07.183 "name": "BaseBdev1", 00:12:07.183 "uuid": "d3bf97f1-20b8-4a08-881f-c7dcdee3783d", 00:12:07.183 "is_configured": true, 00:12:07.183 "data_offset": 2048, 00:12:07.183 "data_size": 63488 00:12:07.183 }, 00:12:07.183 { 00:12:07.183 "name": null, 00:12:07.183 "uuid": "123ca75e-e632-490a-a537-79c82fc94ce0", 00:12:07.183 "is_configured": false, 00:12:07.183 "data_offset": 0, 00:12:07.183 "data_size": 63488 00:12:07.183 }, 00:12:07.183 { 00:12:07.183 "name": "BaseBdev3", 00:12:07.183 "uuid": "847286d4-0947-4436-b562-44765032f976", 00:12:07.183 "is_configured": true, 00:12:07.183 "data_offset": 2048, 00:12:07.183 "data_size": 63488 00:12:07.183 } 00:12:07.183 ] 00:12:07.183 }' 00:12:07.183 14:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.183 14:36:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.444 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.444 14:36:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.444 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:07.444 14:36:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.444 14:36:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.444 14:36:59 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:07.444 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:07.444 14:36:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.444 14:36:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.444 [2024-10-01 14:36:59.065102] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:07.444 14:36:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.444 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:07.444 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.444 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.444 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:07.444 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:07.444 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:07.444 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.444 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.444 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.444 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.444 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.444 14:36:59 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.444 14:36:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.444 14:36:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.444 14:36:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.444 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.444 "name": "Existed_Raid", 00:12:07.444 "uuid": "30ec6385-f8bd-4fcc-9d5e-b9d8e940390a", 00:12:07.444 "strip_size_kb": 64, 00:12:07.444 "state": "configuring", 00:12:07.444 "raid_level": "raid5f", 00:12:07.444 "superblock": true, 00:12:07.444 "num_base_bdevs": 3, 00:12:07.444 "num_base_bdevs_discovered": 1, 00:12:07.444 "num_base_bdevs_operational": 3, 00:12:07.444 "base_bdevs_list": [ 00:12:07.444 { 00:12:07.444 "name": "BaseBdev1", 00:12:07.444 "uuid": "d3bf97f1-20b8-4a08-881f-c7dcdee3783d", 00:12:07.444 "is_configured": true, 00:12:07.444 "data_offset": 2048, 00:12:07.444 "data_size": 63488 00:12:07.444 }, 00:12:07.444 { 00:12:07.444 "name": null, 00:12:07.444 "uuid": "123ca75e-e632-490a-a537-79c82fc94ce0", 00:12:07.444 "is_configured": false, 00:12:07.444 "data_offset": 0, 00:12:07.444 "data_size": 63488 00:12:07.444 }, 00:12:07.444 { 00:12:07.444 "name": null, 00:12:07.444 "uuid": "847286d4-0947-4436-b562-44765032f976", 00:12:07.444 "is_configured": false, 00:12:07.444 "data_offset": 0, 00:12:07.444 "data_size": 63488 00:12:07.444 } 00:12:07.444 ] 00:12:07.444 }' 00:12:07.444 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.444 14:36:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.794 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:07.794 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:07.794 14:36:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.794 14:36:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.794 14:36:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.794 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:07.794 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:07.794 14:36:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.794 14:36:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.794 [2024-10-01 14:36:59.445152] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:08.054 14:36:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.054 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:08.054 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.054 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.054 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:08.054 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.054 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:08.054 14:36:59 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.054 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.054 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.054 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.054 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.054 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.055 14:36:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.055 14:36:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.055 14:36:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.055 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.055 "name": "Existed_Raid", 00:12:08.055 "uuid": "30ec6385-f8bd-4fcc-9d5e-b9d8e940390a", 00:12:08.055 "strip_size_kb": 64, 00:12:08.055 "state": "configuring", 00:12:08.055 "raid_level": "raid5f", 00:12:08.055 "superblock": true, 00:12:08.055 "num_base_bdevs": 3, 00:12:08.055 "num_base_bdevs_discovered": 2, 00:12:08.055 "num_base_bdevs_operational": 3, 00:12:08.055 "base_bdevs_list": [ 00:12:08.055 { 00:12:08.055 "name": "BaseBdev1", 00:12:08.055 "uuid": "d3bf97f1-20b8-4a08-881f-c7dcdee3783d", 00:12:08.055 "is_configured": true, 00:12:08.055 "data_offset": 2048, 00:12:08.055 "data_size": 63488 00:12:08.055 }, 00:12:08.055 { 00:12:08.055 "name": null, 00:12:08.055 "uuid": "123ca75e-e632-490a-a537-79c82fc94ce0", 00:12:08.055 "is_configured": false, 00:12:08.055 "data_offset": 0, 00:12:08.055 "data_size": 63488 00:12:08.055 }, 00:12:08.055 { 
00:12:08.055 "name": "BaseBdev3", 00:12:08.055 "uuid": "847286d4-0947-4436-b562-44765032f976", 00:12:08.055 "is_configured": true, 00:12:08.055 "data_offset": 2048, 00:12:08.055 "data_size": 63488 00:12:08.055 } 00:12:08.055 ] 00:12:08.055 }' 00:12:08.055 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.055 14:36:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.315 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:08.316 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.316 14:36:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.316 14:36:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.316 14:36:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.316 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:08.316 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:08.316 14:36:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.316 14:36:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.316 [2024-10-01 14:36:59.813311] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:08.316 14:36:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.316 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:08.316 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:12:08.316 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.316 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:08.316 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.316 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:08.316 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.316 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.316 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.316 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.316 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.316 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.316 14:36:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.316 14:36:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.316 14:36:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.316 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.316 "name": "Existed_Raid", 00:12:08.316 "uuid": "30ec6385-f8bd-4fcc-9d5e-b9d8e940390a", 00:12:08.316 "strip_size_kb": 64, 00:12:08.316 "state": "configuring", 00:12:08.316 "raid_level": "raid5f", 00:12:08.316 "superblock": true, 00:12:08.316 "num_base_bdevs": 3, 00:12:08.316 "num_base_bdevs_discovered": 1, 00:12:08.316 
"num_base_bdevs_operational": 3, 00:12:08.316 "base_bdevs_list": [ 00:12:08.316 { 00:12:08.316 "name": null, 00:12:08.316 "uuid": "d3bf97f1-20b8-4a08-881f-c7dcdee3783d", 00:12:08.316 "is_configured": false, 00:12:08.316 "data_offset": 0, 00:12:08.316 "data_size": 63488 00:12:08.316 }, 00:12:08.316 { 00:12:08.316 "name": null, 00:12:08.316 "uuid": "123ca75e-e632-490a-a537-79c82fc94ce0", 00:12:08.316 "is_configured": false, 00:12:08.316 "data_offset": 0, 00:12:08.316 "data_size": 63488 00:12:08.316 }, 00:12:08.316 { 00:12:08.316 "name": "BaseBdev3", 00:12:08.316 "uuid": "847286d4-0947-4436-b562-44765032f976", 00:12:08.316 "is_configured": true, 00:12:08.316 "data_offset": 2048, 00:12:08.316 "data_size": 63488 00:12:08.316 } 00:12:08.316 ] 00:12:08.316 }' 00:12:08.316 14:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.316 14:36:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.578 14:37:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.578 14:37:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:08.578 14:37:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.578 14:37:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.578 14:37:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.578 14:37:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:08.578 14:37:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:08.578 14:37:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.578 14:37:00 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.578 [2024-10-01 14:37:00.252456] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:08.578 14:37:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.578 14:37:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:08.578 14:37:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.578 14:37:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.578 14:37:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:08.578 14:37:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.578 14:37:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:08.578 14:37:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.578 14:37:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.578 14:37:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.578 14:37:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.839 14:37:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.839 14:37:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.839 14:37:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.839 14:37:00 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:08.839 14:37:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.839 14:37:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.839 "name": "Existed_Raid", 00:12:08.839 "uuid": "30ec6385-f8bd-4fcc-9d5e-b9d8e940390a", 00:12:08.839 "strip_size_kb": 64, 00:12:08.839 "state": "configuring", 00:12:08.839 "raid_level": "raid5f", 00:12:08.839 "superblock": true, 00:12:08.839 "num_base_bdevs": 3, 00:12:08.839 "num_base_bdevs_discovered": 2, 00:12:08.839 "num_base_bdevs_operational": 3, 00:12:08.839 "base_bdevs_list": [ 00:12:08.839 { 00:12:08.839 "name": null, 00:12:08.839 "uuid": "d3bf97f1-20b8-4a08-881f-c7dcdee3783d", 00:12:08.839 "is_configured": false, 00:12:08.839 "data_offset": 0, 00:12:08.839 "data_size": 63488 00:12:08.839 }, 00:12:08.839 { 00:12:08.839 "name": "BaseBdev2", 00:12:08.839 "uuid": "123ca75e-e632-490a-a537-79c82fc94ce0", 00:12:08.839 "is_configured": true, 00:12:08.839 "data_offset": 2048, 00:12:08.839 "data_size": 63488 00:12:08.839 }, 00:12:08.839 { 00:12:08.839 "name": "BaseBdev3", 00:12:08.839 "uuid": "847286d4-0947-4436-b562-44765032f976", 00:12:08.839 "is_configured": true, 00:12:08.839 "data_offset": 2048, 00:12:08.839 "data_size": 63488 00:12:08.839 } 00:12:08.839 ] 00:12:08.839 }' 00:12:08.839 14:37:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.839 14:37:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.101 14:37:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.101 14:37:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.101 14:37:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.101 14:37:00 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:09.101 14:37:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.101 14:37:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:09.101 14:37:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:09.101 14:37:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.101 14:37:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.101 14:37:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.101 14:37:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.101 14:37:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d3bf97f1-20b8-4a08-881f-c7dcdee3783d 00:12:09.101 14:37:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.101 14:37:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.101 [2024-10-01 14:37:00.671523] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:09.101 [2024-10-01 14:37:00.672002] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:09.101 [2024-10-01 14:37:00.672034] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:09.101 [2024-10-01 14:37:00.672337] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:09.101 NewBaseBdev 00:12:09.101 14:37:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.101 14:37:00 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:09.101 14:37:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:12:09.101 14:37:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:09.101 14:37:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:09.101 14:37:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:09.101 14:37:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:09.101 14:37:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:09.101 14:37:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.101 14:37:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.101 [2024-10-01 14:37:00.676167] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:09.101 [2024-10-01 14:37:00.676193] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:09.101 [2024-10-01 14:37:00.676369] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:09.101 14:37:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.101 14:37:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:09.101 14:37:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.101 14:37:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.101 [ 00:12:09.101 { 00:12:09.101 "name": "NewBaseBdev", 00:12:09.101 "aliases": [ 00:12:09.101 
"d3bf97f1-20b8-4a08-881f-c7dcdee3783d" 00:12:09.101 ], 00:12:09.101 "product_name": "Malloc disk", 00:12:09.101 "block_size": 512, 00:12:09.101 "num_blocks": 65536, 00:12:09.101 "uuid": "d3bf97f1-20b8-4a08-881f-c7dcdee3783d", 00:12:09.101 "assigned_rate_limits": { 00:12:09.101 "rw_ios_per_sec": 0, 00:12:09.101 "rw_mbytes_per_sec": 0, 00:12:09.101 "r_mbytes_per_sec": 0, 00:12:09.101 "w_mbytes_per_sec": 0 00:12:09.101 }, 00:12:09.101 "claimed": true, 00:12:09.101 "claim_type": "exclusive_write", 00:12:09.101 "zoned": false, 00:12:09.101 "supported_io_types": { 00:12:09.101 "read": true, 00:12:09.101 "write": true, 00:12:09.101 "unmap": true, 00:12:09.101 "flush": true, 00:12:09.101 "reset": true, 00:12:09.101 "nvme_admin": false, 00:12:09.101 "nvme_io": false, 00:12:09.101 "nvme_io_md": false, 00:12:09.101 "write_zeroes": true, 00:12:09.101 "zcopy": true, 00:12:09.101 "get_zone_info": false, 00:12:09.101 "zone_management": false, 00:12:09.101 "zone_append": false, 00:12:09.101 "compare": false, 00:12:09.101 "compare_and_write": false, 00:12:09.101 "abort": true, 00:12:09.101 "seek_hole": false, 00:12:09.101 "seek_data": false, 00:12:09.101 "copy": true, 00:12:09.102 "nvme_iov_md": false 00:12:09.102 }, 00:12:09.102 "memory_domains": [ 00:12:09.102 { 00:12:09.102 "dma_device_id": "system", 00:12:09.102 "dma_device_type": 1 00:12:09.102 }, 00:12:09.102 { 00:12:09.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.102 "dma_device_type": 2 00:12:09.102 } 00:12:09.102 ], 00:12:09.102 "driver_specific": {} 00:12:09.102 } 00:12:09.102 ] 00:12:09.102 14:37:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.102 14:37:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:09.102 14:37:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:12:09.102 14:37:00 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.102 14:37:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:09.102 14:37:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:09.102 14:37:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:09.102 14:37:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:09.102 14:37:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.102 14:37:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.102 14:37:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.102 14:37:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.102 14:37:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.102 14:37:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.102 14:37:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.102 14:37:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.102 14:37:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.102 14:37:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.102 "name": "Existed_Raid", 00:12:09.102 "uuid": "30ec6385-f8bd-4fcc-9d5e-b9d8e940390a", 00:12:09.102 "strip_size_kb": 64, 00:12:09.102 "state": "online", 00:12:09.102 "raid_level": "raid5f", 00:12:09.102 "superblock": true, 00:12:09.102 "num_base_bdevs": 3, 00:12:09.102 
"num_base_bdevs_discovered": 3, 00:12:09.102 "num_base_bdevs_operational": 3, 00:12:09.102 "base_bdevs_list": [ 00:12:09.102 { 00:12:09.102 "name": "NewBaseBdev", 00:12:09.102 "uuid": "d3bf97f1-20b8-4a08-881f-c7dcdee3783d", 00:12:09.102 "is_configured": true, 00:12:09.102 "data_offset": 2048, 00:12:09.102 "data_size": 63488 00:12:09.102 }, 00:12:09.102 { 00:12:09.102 "name": "BaseBdev2", 00:12:09.102 "uuid": "123ca75e-e632-490a-a537-79c82fc94ce0", 00:12:09.102 "is_configured": true, 00:12:09.102 "data_offset": 2048, 00:12:09.102 "data_size": 63488 00:12:09.102 }, 00:12:09.102 { 00:12:09.102 "name": "BaseBdev3", 00:12:09.102 "uuid": "847286d4-0947-4436-b562-44765032f976", 00:12:09.102 "is_configured": true, 00:12:09.102 "data_offset": 2048, 00:12:09.102 "data_size": 63488 00:12:09.102 } 00:12:09.102 ] 00:12:09.102 }' 00:12:09.102 14:37:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.102 14:37:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.363 14:37:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:09.363 14:37:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:09.363 14:37:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:09.363 14:37:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:09.363 14:37:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:09.363 14:37:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:09.363 14:37:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:09.363 14:37:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 
00:12:09.363 14:37:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.363 14:37:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.363 [2024-10-01 14:37:01.037315] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:09.625 14:37:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.625 14:37:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:09.625 "name": "Existed_Raid", 00:12:09.625 "aliases": [ 00:12:09.625 "30ec6385-f8bd-4fcc-9d5e-b9d8e940390a" 00:12:09.625 ], 00:12:09.625 "product_name": "Raid Volume", 00:12:09.625 "block_size": 512, 00:12:09.625 "num_blocks": 126976, 00:12:09.625 "uuid": "30ec6385-f8bd-4fcc-9d5e-b9d8e940390a", 00:12:09.625 "assigned_rate_limits": { 00:12:09.625 "rw_ios_per_sec": 0, 00:12:09.625 "rw_mbytes_per_sec": 0, 00:12:09.625 "r_mbytes_per_sec": 0, 00:12:09.625 "w_mbytes_per_sec": 0 00:12:09.625 }, 00:12:09.625 "claimed": false, 00:12:09.625 "zoned": false, 00:12:09.625 "supported_io_types": { 00:12:09.625 "read": true, 00:12:09.625 "write": true, 00:12:09.625 "unmap": false, 00:12:09.625 "flush": false, 00:12:09.625 "reset": true, 00:12:09.625 "nvme_admin": false, 00:12:09.625 "nvme_io": false, 00:12:09.625 "nvme_io_md": false, 00:12:09.625 "write_zeroes": true, 00:12:09.625 "zcopy": false, 00:12:09.625 "get_zone_info": false, 00:12:09.625 "zone_management": false, 00:12:09.625 "zone_append": false, 00:12:09.625 "compare": false, 00:12:09.625 "compare_and_write": false, 00:12:09.625 "abort": false, 00:12:09.625 "seek_hole": false, 00:12:09.625 "seek_data": false, 00:12:09.625 "copy": false, 00:12:09.625 "nvme_iov_md": false 00:12:09.625 }, 00:12:09.625 "driver_specific": { 00:12:09.625 "raid": { 00:12:09.625 "uuid": "30ec6385-f8bd-4fcc-9d5e-b9d8e940390a", 00:12:09.625 "strip_size_kb": 64, 00:12:09.625 "state": 
"online", 00:12:09.625 "raid_level": "raid5f", 00:12:09.625 "superblock": true, 00:12:09.625 "num_base_bdevs": 3, 00:12:09.625 "num_base_bdevs_discovered": 3, 00:12:09.625 "num_base_bdevs_operational": 3, 00:12:09.625 "base_bdevs_list": [ 00:12:09.625 { 00:12:09.625 "name": "NewBaseBdev", 00:12:09.625 "uuid": "d3bf97f1-20b8-4a08-881f-c7dcdee3783d", 00:12:09.625 "is_configured": true, 00:12:09.625 "data_offset": 2048, 00:12:09.625 "data_size": 63488 00:12:09.625 }, 00:12:09.625 { 00:12:09.625 "name": "BaseBdev2", 00:12:09.625 "uuid": "123ca75e-e632-490a-a537-79c82fc94ce0", 00:12:09.625 "is_configured": true, 00:12:09.625 "data_offset": 2048, 00:12:09.625 "data_size": 63488 00:12:09.625 }, 00:12:09.625 { 00:12:09.625 "name": "BaseBdev3", 00:12:09.625 "uuid": "847286d4-0947-4436-b562-44765032f976", 00:12:09.625 "is_configured": true, 00:12:09.625 "data_offset": 2048, 00:12:09.625 "data_size": 63488 00:12:09.625 } 00:12:09.625 ] 00:12:09.625 } 00:12:09.625 } 00:12:09.625 }' 00:12:09.625 14:37:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:09.625 14:37:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:09.625 BaseBdev2 00:12:09.625 BaseBdev3' 00:12:09.625 14:37:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.625 14:37:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:09.625 14:37:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:09.625 14:37:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:09.625 14:37:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.625 14:37:01 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.625 14:37:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.625 14:37:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.625 14:37:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:09.625 14:37:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:09.625 14:37:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:09.625 14:37:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:09.625 14:37:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.625 14:37:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.625 14:37:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.625 14:37:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.625 14:37:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:09.625 14:37:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:09.625 14:37:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:09.625 14:37:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:09.625 14:37:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.625 14:37:01 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.625 14:37:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.625 14:37:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.625 14:37:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:09.625 14:37:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:09.625 14:37:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:09.625 14:37:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.625 14:37:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.625 [2024-10-01 14:37:01.237098] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:09.625 [2024-10-01 14:37:01.237138] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:09.625 [2024-10-01 14:37:01.237239] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:09.625 [2024-10-01 14:37:01.237572] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:09.625 [2024-10-01 14:37:01.237587] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:09.625 14:37:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.625 14:37:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 78535 00:12:09.625 14:37:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 78535 ']' 00:12:09.625 14:37:01 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 78535 00:12:09.625 14:37:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:12:09.625 14:37:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:09.625 14:37:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78535 00:12:09.625 killing process with pid 78535 00:12:09.625 14:37:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:09.625 14:37:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:09.625 14:37:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78535' 00:12:09.625 14:37:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 78535 00:12:09.625 [2024-10-01 14:37:01.270591] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:09.625 14:37:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 78535 00:12:09.887 [2024-10-01 14:37:01.485615] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:10.829 ************************************ 00:12:10.829 END TEST raid5f_state_function_test_sb 00:12:10.829 ************************************ 00:12:10.829 14:37:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:10.829 00:12:10.829 real 0m8.353s 00:12:10.829 user 0m12.932s 00:12:10.829 sys 0m1.485s 00:12:10.829 14:37:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:10.829 14:37:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.829 14:37:02 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test 
raid5f 3 00:12:10.829 14:37:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:10.829 14:37:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:10.829 14:37:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:10.829 ************************************ 00:12:10.829 START TEST raid5f_superblock_test 00:12:10.829 ************************************ 00:12:10.829 14:37:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 3 00:12:10.829 14:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:12:10.829 14:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:12:10.829 14:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:10.829 14:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:10.829 14:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:10.829 14:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:10.829 14:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:10.829 14:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:10.829 14:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:10.829 14:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:10.829 14:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:10.829 14:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:10.829 14:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:10.829 14:37:02 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:12:10.829 14:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:10.829 14:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:10.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:10.829 14:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=79128 00:12:10.829 14:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 79128 00:12:10.829 14:37:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 79128 ']' 00:12:10.829 14:37:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:10.829 14:37:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:10.829 14:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:10.829 14:37:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:10.829 14:37:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:10.829 14:37:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.191 [2024-10-01 14:37:02.578478] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:12:11.191 [2024-10-01 14:37:02.578645] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79128 ] 00:12:11.191 [2024-10-01 14:37:02.729635] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.452 [2024-10-01 14:37:02.958253] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.452 [2024-10-01 14:37:03.105644] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:11.452 [2024-10-01 14:37:03.105693] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.025 malloc1 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.025 [2024-10-01 14:37:03.496268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:12.025 [2024-10-01 14:37:03.496343] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:12.025 [2024-10-01 14:37:03.496367] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:12.025 [2024-10-01 14:37:03.496380] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:12.025 [2024-10-01 14:37:03.498791] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:12.025 [2024-10-01 14:37:03.498979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:12.025 pt1 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.025 malloc2 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.025 [2024-10-01 14:37:03.562550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:12.025 [2024-10-01 14:37:03.562630] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:12.025 [2024-10-01 14:37:03.562659] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:12.025 [2024-10-01 14:37:03.562669] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:12.025 [2024-10-01 14:37:03.565095] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:12.025 [2024-10-01 14:37:03.565144] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:12.025 pt2 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.025 malloc3 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.025 [2024-10-01 14:37:03.611030] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:12.025 [2024-10-01 14:37:03.611103] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:12.025 [2024-10-01 14:37:03.611127] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:12.025 [2024-10-01 14:37:03.611136] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:12.025 [2024-10-01 14:37:03.613521] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:12.025 [2024-10-01 14:37:03.613567] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:12.025 pt3 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.025 [2024-10-01 14:37:03.623115] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:12.025 [2024-10-01 14:37:03.625185] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:12.025 [2024-10-01 14:37:03.625422] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:12.025 [2024-10-01 14:37:03.625612] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:12.025 [2024-10-01 14:37:03.625627] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:12:12.025 [2024-10-01 14:37:03.625978] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:12.025 [2024-10-01 14:37:03.629887] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:12.025 [2024-10-01 14:37:03.629911] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:12.025 [2024-10-01 14:37:03.630127] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:12.025 14:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.026 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:12:12.026 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:12.026 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:12.026 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:12.026 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:12.026 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:12.026 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.026 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.026 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.026 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.026 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.026 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:12:12.026 14:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.026 14:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.026 14:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.026 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.026 "name": "raid_bdev1", 00:12:12.026 "uuid": "08e720cd-549c-469c-8eaf-7416f2847685", 00:12:12.026 "strip_size_kb": 64, 00:12:12.026 "state": "online", 00:12:12.026 "raid_level": "raid5f", 00:12:12.026 "superblock": true, 00:12:12.026 "num_base_bdevs": 3, 00:12:12.026 "num_base_bdevs_discovered": 3, 00:12:12.026 "num_base_bdevs_operational": 3, 00:12:12.026 "base_bdevs_list": [ 00:12:12.026 { 00:12:12.026 "name": "pt1", 00:12:12.026 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:12.026 "is_configured": true, 00:12:12.026 "data_offset": 2048, 00:12:12.026 "data_size": 63488 00:12:12.026 }, 00:12:12.026 { 00:12:12.026 "name": "pt2", 00:12:12.026 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:12.026 "is_configured": true, 00:12:12.026 "data_offset": 2048, 00:12:12.026 "data_size": 63488 00:12:12.026 }, 00:12:12.026 { 00:12:12.026 "name": "pt3", 00:12:12.026 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:12.026 "is_configured": true, 00:12:12.026 "data_offset": 2048, 00:12:12.026 "data_size": 63488 00:12:12.026 } 00:12:12.026 ] 00:12:12.026 }' 00:12:12.026 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.026 14:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.287 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:12.287 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:12.287 14:37:03 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:12.287 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:12.287 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:12.287 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:12.287 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:12.287 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:12.287 14:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.287 14:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.287 [2024-10-01 14:37:03.950821] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:12.287 14:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.546 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:12.546 "name": "raid_bdev1", 00:12:12.546 "aliases": [ 00:12:12.546 "08e720cd-549c-469c-8eaf-7416f2847685" 00:12:12.546 ], 00:12:12.546 "product_name": "Raid Volume", 00:12:12.546 "block_size": 512, 00:12:12.546 "num_blocks": 126976, 00:12:12.546 "uuid": "08e720cd-549c-469c-8eaf-7416f2847685", 00:12:12.546 "assigned_rate_limits": { 00:12:12.546 "rw_ios_per_sec": 0, 00:12:12.546 "rw_mbytes_per_sec": 0, 00:12:12.546 "r_mbytes_per_sec": 0, 00:12:12.546 "w_mbytes_per_sec": 0 00:12:12.546 }, 00:12:12.546 "claimed": false, 00:12:12.546 "zoned": false, 00:12:12.546 "supported_io_types": { 00:12:12.546 "read": true, 00:12:12.546 "write": true, 00:12:12.546 "unmap": false, 00:12:12.546 "flush": false, 00:12:12.546 "reset": true, 00:12:12.546 "nvme_admin": false, 00:12:12.546 "nvme_io": false, 00:12:12.546 "nvme_io_md": false, 
00:12:12.546 "write_zeroes": true, 00:12:12.546 "zcopy": false, 00:12:12.546 "get_zone_info": false, 00:12:12.546 "zone_management": false, 00:12:12.546 "zone_append": false, 00:12:12.546 "compare": false, 00:12:12.546 "compare_and_write": false, 00:12:12.546 "abort": false, 00:12:12.546 "seek_hole": false, 00:12:12.546 "seek_data": false, 00:12:12.546 "copy": false, 00:12:12.546 "nvme_iov_md": false 00:12:12.546 }, 00:12:12.546 "driver_specific": { 00:12:12.546 "raid": { 00:12:12.546 "uuid": "08e720cd-549c-469c-8eaf-7416f2847685", 00:12:12.546 "strip_size_kb": 64, 00:12:12.546 "state": "online", 00:12:12.547 "raid_level": "raid5f", 00:12:12.547 "superblock": true, 00:12:12.547 "num_base_bdevs": 3, 00:12:12.547 "num_base_bdevs_discovered": 3, 00:12:12.547 "num_base_bdevs_operational": 3, 00:12:12.547 "base_bdevs_list": [ 00:12:12.547 { 00:12:12.547 "name": "pt1", 00:12:12.547 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:12.547 "is_configured": true, 00:12:12.547 "data_offset": 2048, 00:12:12.547 "data_size": 63488 00:12:12.547 }, 00:12:12.547 { 00:12:12.547 "name": "pt2", 00:12:12.547 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:12.547 "is_configured": true, 00:12:12.547 "data_offset": 2048, 00:12:12.547 "data_size": 63488 00:12:12.547 }, 00:12:12.547 { 00:12:12.547 "name": "pt3", 00:12:12.547 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:12.547 "is_configured": true, 00:12:12.547 "data_offset": 2048, 00:12:12.547 "data_size": 63488 00:12:12.547 } 00:12:12.547 ] 00:12:12.547 } 00:12:12.547 } 00:12:12.547 }' 00:12:12.547 14:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:12.547 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:12.547 pt2 00:12:12.547 pt3' 00:12:12.547 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:12:12.547 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:12.547 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:12.547 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:12.547 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.547 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.547 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.547 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.547 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.547 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:12.547 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:12.547 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:12.547 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.547 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.547 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.547 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.547 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.547 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:12.547 
14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:12.547 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.547 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:12.547 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.547 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.547 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.547 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.547 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:12.547 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:12.547 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.547 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.547 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:12.547 [2024-10-01 14:37:04.158839] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:12.547 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.547 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=08e720cd-549c-469c-8eaf-7416f2847685 00:12:12.547 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 08e720cd-549c-469c-8eaf-7416f2847685 ']' 00:12:12.547 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:12.547 14:37:04 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.547 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.547 [2024-10-01 14:37:04.194641] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:12.547 [2024-10-01 14:37:04.194667] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:12.547 [2024-10-01 14:37:04.194747] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:12.547 [2024-10-01 14:37:04.194823] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:12.547 [2024-10-01 14:37:04.194833] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:12.547 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.547 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:12.547 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.547 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.547 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.547 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.806 [2024-10-01 14:37:04.302715] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:12.806 [2024-10-01 14:37:04.304637] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:12.806 [2024-10-01 14:37:04.304689] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:12.806 [2024-10-01 14:37:04.304752] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:12.806 [2024-10-01 14:37:04.304802] 
bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:12.806 [2024-10-01 14:37:04.304822] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:12.806 [2024-10-01 14:37:04.304840] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:12.806 [2024-10-01 14:37:04.304852] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:12:12.806 request: 00:12:12.806 { 00:12:12.806 "name": "raid_bdev1", 00:12:12.806 "raid_level": "raid5f", 00:12:12.806 "base_bdevs": [ 00:12:12.806 "malloc1", 00:12:12.806 "malloc2", 00:12:12.806 "malloc3" 00:12:12.806 ], 00:12:12.806 "strip_size_kb": 64, 00:12:12.806 "superblock": false, 00:12:12.806 "method": "bdev_raid_create", 00:12:12.806 "req_id": 1 00:12:12.806 } 00:12:12.806 Got JSON-RPC error response 00:12:12.806 response: 00:12:12.806 { 00:12:12.806 "code": -17, 00:12:12.806 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:12.806 } 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.806 
14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.806 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.806 [2024-10-01 14:37:04.346683] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:12.806 [2024-10-01 14:37:04.346828] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:12.807 [2024-10-01 14:37:04.346868] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:12.807 [2024-10-01 14:37:04.347228] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:12.807 [2024-10-01 14:37:04.349474] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:12.807 [2024-10-01 14:37:04.349583] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:12.807 [2024-10-01 14:37:04.349746] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:12.807 [2024-10-01 14:37:04.349817] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:12.807 pt1 00:12:12.807 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.807 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:12:12.807 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:12.807 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:12.807 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:12.807 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:12.807 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:12.807 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.807 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.807 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.807 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.807 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.807 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.807 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.807 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.807 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.807 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.807 "name": "raid_bdev1", 00:12:12.807 "uuid": "08e720cd-549c-469c-8eaf-7416f2847685", 00:12:12.807 "strip_size_kb": 64, 00:12:12.807 "state": "configuring", 00:12:12.807 "raid_level": "raid5f", 00:12:12.807 "superblock": true, 00:12:12.807 "num_base_bdevs": 3, 00:12:12.807 "num_base_bdevs_discovered": 1, 00:12:12.807 
"num_base_bdevs_operational": 3, 00:12:12.807 "base_bdevs_list": [ 00:12:12.807 { 00:12:12.807 "name": "pt1", 00:12:12.807 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:12.807 "is_configured": true, 00:12:12.807 "data_offset": 2048, 00:12:12.807 "data_size": 63488 00:12:12.807 }, 00:12:12.807 { 00:12:12.807 "name": null, 00:12:12.807 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:12.807 "is_configured": false, 00:12:12.807 "data_offset": 2048, 00:12:12.807 "data_size": 63488 00:12:12.807 }, 00:12:12.807 { 00:12:12.807 "name": null, 00:12:12.807 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:12.807 "is_configured": false, 00:12:12.807 "data_offset": 2048, 00:12:12.807 "data_size": 63488 00:12:12.807 } 00:12:12.807 ] 00:12:12.807 }' 00:12:12.807 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.807 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.066 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:12:13.066 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:13.066 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.066 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.066 [2024-10-01 14:37:04.694789] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:13.066 [2024-10-01 14:37:04.694849] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.066 [2024-10-01 14:37:04.694870] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:13.066 [2024-10-01 14:37:04.694879] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.066 [2024-10-01 14:37:04.695278] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.066 [2024-10-01 14:37:04.695291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:13.066 [2024-10-01 14:37:04.695364] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:13.066 [2024-10-01 14:37:04.695384] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:13.066 pt2 00:12:13.066 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.066 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:13.066 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.066 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.066 [2024-10-01 14:37:04.702816] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:13.066 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.066 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:12:13.066 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:13.066 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:13.066 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:13.066 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:13.066 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:13.066 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.066 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:12:13.066 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.066 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.066 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.066 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.066 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.066 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.066 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.066 14:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.066 "name": "raid_bdev1", 00:12:13.066 "uuid": "08e720cd-549c-469c-8eaf-7416f2847685", 00:12:13.066 "strip_size_kb": 64, 00:12:13.066 "state": "configuring", 00:12:13.066 "raid_level": "raid5f", 00:12:13.066 "superblock": true, 00:12:13.066 "num_base_bdevs": 3, 00:12:13.066 "num_base_bdevs_discovered": 1, 00:12:13.067 "num_base_bdevs_operational": 3, 00:12:13.067 "base_bdevs_list": [ 00:12:13.067 { 00:12:13.067 "name": "pt1", 00:12:13.067 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:13.067 "is_configured": true, 00:12:13.067 "data_offset": 2048, 00:12:13.067 "data_size": 63488 00:12:13.067 }, 00:12:13.067 { 00:12:13.067 "name": null, 00:12:13.067 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:13.067 "is_configured": false, 00:12:13.067 "data_offset": 0, 00:12:13.067 "data_size": 63488 00:12:13.067 }, 00:12:13.067 { 00:12:13.067 "name": null, 00:12:13.067 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:13.067 "is_configured": false, 00:12:13.067 "data_offset": 2048, 00:12:13.067 "data_size": 63488 00:12:13.067 } 00:12:13.067 ] 00:12:13.067 }' 00:12:13.067 14:37:04 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.067 14:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.638 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:13.638 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:13.638 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:13.638 14:37:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.638 14:37:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.638 [2024-10-01 14:37:05.078867] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:13.638 [2024-10-01 14:37:05.078931] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.638 [2024-10-01 14:37:05.078946] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:12:13.638 [2024-10-01 14:37:05.078956] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.638 [2024-10-01 14:37:05.079370] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.638 [2024-10-01 14:37:05.079386] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:13.638 [2024-10-01 14:37:05.079454] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:13.638 [2024-10-01 14:37:05.079476] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:13.638 pt2 00:12:13.638 14:37:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.638 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:13.638 14:37:05 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:13.638 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:13.638 14:37:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.638 14:37:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.638 [2024-10-01 14:37:05.086879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:13.638 [2024-10-01 14:37:05.086927] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.638 [2024-10-01 14:37:05.086943] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:13.639 [2024-10-01 14:37:05.086954] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.639 [2024-10-01 14:37:05.087327] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.639 [2024-10-01 14:37:05.087345] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:13.639 [2024-10-01 14:37:05.087408] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:13.639 [2024-10-01 14:37:05.087428] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:13.639 [2024-10-01 14:37:05.087548] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:13.639 [2024-10-01 14:37:05.087559] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:13.639 [2024-10-01 14:37:05.087820] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:13.639 [2024-10-01 14:37:05.091469] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:13.639 [2024-10-01 14:37:05.091555] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:13.639 [2024-10-01 14:37:05.091823] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:13.639 pt3 00:12:13.639 14:37:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.639 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:13.639 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:13.639 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:12:13.639 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:13.639 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:13.639 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:13.639 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:13.639 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:13.639 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.639 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.639 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.639 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.639 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.639 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.639 14:37:05 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.639 14:37:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.639 14:37:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.639 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.639 "name": "raid_bdev1", 00:12:13.639 "uuid": "08e720cd-549c-469c-8eaf-7416f2847685", 00:12:13.639 "strip_size_kb": 64, 00:12:13.639 "state": "online", 00:12:13.639 "raid_level": "raid5f", 00:12:13.639 "superblock": true, 00:12:13.639 "num_base_bdevs": 3, 00:12:13.639 "num_base_bdevs_discovered": 3, 00:12:13.639 "num_base_bdevs_operational": 3, 00:12:13.639 "base_bdevs_list": [ 00:12:13.639 { 00:12:13.639 "name": "pt1", 00:12:13.639 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:13.639 "is_configured": true, 00:12:13.639 "data_offset": 2048, 00:12:13.639 "data_size": 63488 00:12:13.639 }, 00:12:13.639 { 00:12:13.639 "name": "pt2", 00:12:13.639 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:13.639 "is_configured": true, 00:12:13.639 "data_offset": 2048, 00:12:13.639 "data_size": 63488 00:12:13.639 }, 00:12:13.639 { 00:12:13.639 "name": "pt3", 00:12:13.639 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:13.639 "is_configured": true, 00:12:13.639 "data_offset": 2048, 00:12:13.639 "data_size": 63488 00:12:13.639 } 00:12:13.639 ] 00:12:13.639 }' 00:12:13.639 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.639 14:37:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.899 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:13.899 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:13.899 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:12:13.899 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:13.899 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:13.899 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:13.899 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:13.899 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:13.899 14:37:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.899 14:37:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.899 [2024-10-01 14:37:05.412270] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:13.899 14:37:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.899 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:13.899 "name": "raid_bdev1", 00:12:13.899 "aliases": [ 00:12:13.899 "08e720cd-549c-469c-8eaf-7416f2847685" 00:12:13.899 ], 00:12:13.899 "product_name": "Raid Volume", 00:12:13.899 "block_size": 512, 00:12:13.899 "num_blocks": 126976, 00:12:13.899 "uuid": "08e720cd-549c-469c-8eaf-7416f2847685", 00:12:13.899 "assigned_rate_limits": { 00:12:13.899 "rw_ios_per_sec": 0, 00:12:13.899 "rw_mbytes_per_sec": 0, 00:12:13.899 "r_mbytes_per_sec": 0, 00:12:13.899 "w_mbytes_per_sec": 0 00:12:13.899 }, 00:12:13.899 "claimed": false, 00:12:13.899 "zoned": false, 00:12:13.899 "supported_io_types": { 00:12:13.899 "read": true, 00:12:13.899 "write": true, 00:12:13.899 "unmap": false, 00:12:13.899 "flush": false, 00:12:13.899 "reset": true, 00:12:13.899 "nvme_admin": false, 00:12:13.899 "nvme_io": false, 00:12:13.899 "nvme_io_md": false, 00:12:13.899 "write_zeroes": true, 00:12:13.899 "zcopy": false, 00:12:13.899 
"get_zone_info": false, 00:12:13.899 "zone_management": false, 00:12:13.899 "zone_append": false, 00:12:13.899 "compare": false, 00:12:13.899 "compare_and_write": false, 00:12:13.899 "abort": false, 00:12:13.899 "seek_hole": false, 00:12:13.899 "seek_data": false, 00:12:13.899 "copy": false, 00:12:13.899 "nvme_iov_md": false 00:12:13.899 }, 00:12:13.899 "driver_specific": { 00:12:13.899 "raid": { 00:12:13.899 "uuid": "08e720cd-549c-469c-8eaf-7416f2847685", 00:12:13.899 "strip_size_kb": 64, 00:12:13.899 "state": "online", 00:12:13.899 "raid_level": "raid5f", 00:12:13.900 "superblock": true, 00:12:13.900 "num_base_bdevs": 3, 00:12:13.900 "num_base_bdevs_discovered": 3, 00:12:13.900 "num_base_bdevs_operational": 3, 00:12:13.900 "base_bdevs_list": [ 00:12:13.900 { 00:12:13.900 "name": "pt1", 00:12:13.900 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:13.900 "is_configured": true, 00:12:13.900 "data_offset": 2048, 00:12:13.900 "data_size": 63488 00:12:13.900 }, 00:12:13.900 { 00:12:13.900 "name": "pt2", 00:12:13.900 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:13.900 "is_configured": true, 00:12:13.900 "data_offset": 2048, 00:12:13.900 "data_size": 63488 00:12:13.900 }, 00:12:13.900 { 00:12:13.900 "name": "pt3", 00:12:13.900 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:13.900 "is_configured": true, 00:12:13.900 "data_offset": 2048, 00:12:13.900 "data_size": 63488 00:12:13.900 } 00:12:13.900 ] 00:12:13.900 } 00:12:13.900 } 00:12:13.900 }' 00:12:13.900 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:13.900 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:13.900 pt2 00:12:13.900 pt3' 00:12:13.900 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:13.900 14:37:05 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:13.900 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:13.900 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:13.900 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:13.900 14:37:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.900 14:37:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.900 14:37:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.900 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:13.900 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:13.900 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:13.900 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:13.900 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:13.900 14:37:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.900 14:37:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.900 14:37:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.900 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:13.900 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:13.900 14:37:05 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:13.900 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:13.900 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:13.900 14:37:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.900 14:37:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.161 14:37:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.161 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:14.161 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:14.161 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:14.161 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:14.161 14:37:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.161 14:37:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.161 [2024-10-01 14:37:05.616331] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:14.161 14:37:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.161 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 08e720cd-549c-469c-8eaf-7416f2847685 '!=' 08e720cd-549c-469c-8eaf-7416f2847685 ']' 00:12:14.161 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:12:14.161 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:14.161 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:12:14.161 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:12:14.161 14:37:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.161 14:37:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.161 [2024-10-01 14:37:05.644202] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:14.161 14:37:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.161 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:12:14.161 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:14.161 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:14.161 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:14.161 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:14.161 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:14.161 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.161 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.161 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.161 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.161 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.161 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.161 14:37:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:14.161 14:37:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.161 14:37:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.161 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.161 "name": "raid_bdev1", 00:12:14.161 "uuid": "08e720cd-549c-469c-8eaf-7416f2847685", 00:12:14.161 "strip_size_kb": 64, 00:12:14.161 "state": "online", 00:12:14.161 "raid_level": "raid5f", 00:12:14.161 "superblock": true, 00:12:14.161 "num_base_bdevs": 3, 00:12:14.161 "num_base_bdevs_discovered": 2, 00:12:14.161 "num_base_bdevs_operational": 2, 00:12:14.161 "base_bdevs_list": [ 00:12:14.161 { 00:12:14.161 "name": null, 00:12:14.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.162 "is_configured": false, 00:12:14.162 "data_offset": 0, 00:12:14.162 "data_size": 63488 00:12:14.162 }, 00:12:14.162 { 00:12:14.162 "name": "pt2", 00:12:14.162 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:14.162 "is_configured": true, 00:12:14.162 "data_offset": 2048, 00:12:14.162 "data_size": 63488 00:12:14.162 }, 00:12:14.162 { 00:12:14.162 "name": "pt3", 00:12:14.162 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:14.162 "is_configured": true, 00:12:14.162 "data_offset": 2048, 00:12:14.162 "data_size": 63488 00:12:14.162 } 00:12:14.162 ] 00:12:14.162 }' 00:12:14.162 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.162 14:37:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.422 14:37:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:14.422 14:37:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.422 14:37:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.422 [2024-10-01 14:37:06.004193] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:14.422 [2024-10-01 14:37:06.004229] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:14.422 [2024-10-01 14:37:06.004309] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:14.422 [2024-10-01 14:37:06.004374] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:14.422 [2024-10-01 14:37:06.004390] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:14.422 14:37:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.422 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.422 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:12:14.422 14:37:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.422 14:37:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.422 14:37:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.422 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:14.422 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:14.422 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:14.422 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:14.422 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:14.422 14:37:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.422 14:37:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:12:14.422 14:37:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.422 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:14.422 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:14.422 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:12:14.422 14:37:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.422 14:37:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.422 14:37:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.422 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:14.422 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:14.422 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:14.422 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:14.422 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:14.422 14:37:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.422 14:37:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.422 [2024-10-01 14:37:06.060157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:14.422 [2024-10-01 14:37:06.060208] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:14.422 [2024-10-01 14:37:06.060225] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:12:14.423 [2024-10-01 14:37:06.060236] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:12:14.423 [2024-10-01 14:37:06.062445] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:14.423 [2024-10-01 14:37:06.062584] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:14.423 [2024-10-01 14:37:06.062672] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:14.423 [2024-10-01 14:37:06.062730] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:14.423 pt2 00:12:14.423 14:37:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.423 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:12:14.423 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:14.423 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:14.423 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:14.423 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:14.423 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:14.423 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.423 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.423 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.423 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.423 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.423 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:14.423 14:37:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.423 14:37:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.423 14:37:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.423 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.423 "name": "raid_bdev1", 00:12:14.423 "uuid": "08e720cd-549c-469c-8eaf-7416f2847685", 00:12:14.423 "strip_size_kb": 64, 00:12:14.423 "state": "configuring", 00:12:14.423 "raid_level": "raid5f", 00:12:14.423 "superblock": true, 00:12:14.423 "num_base_bdevs": 3, 00:12:14.423 "num_base_bdevs_discovered": 1, 00:12:14.423 "num_base_bdevs_operational": 2, 00:12:14.423 "base_bdevs_list": [ 00:12:14.423 { 00:12:14.423 "name": null, 00:12:14.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.423 "is_configured": false, 00:12:14.423 "data_offset": 2048, 00:12:14.423 "data_size": 63488 00:12:14.423 }, 00:12:14.423 { 00:12:14.423 "name": "pt2", 00:12:14.423 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:14.423 "is_configured": true, 00:12:14.423 "data_offset": 2048, 00:12:14.423 "data_size": 63488 00:12:14.423 }, 00:12:14.423 { 00:12:14.423 "name": null, 00:12:14.423 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:14.423 "is_configured": false, 00:12:14.423 "data_offset": 2048, 00:12:14.423 "data_size": 63488 00:12:14.423 } 00:12:14.423 ] 00:12:14.423 }' 00:12:14.423 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.423 14:37:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.044 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:15.044 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:15.044 14:37:06 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:12:15.044 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:15.044 14:37:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.045 14:37:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.045 [2024-10-01 14:37:06.388260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:15.045 [2024-10-01 14:37:06.388319] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:15.045 [2024-10-01 14:37:06.388340] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:15.045 [2024-10-01 14:37:06.388351] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:15.045 [2024-10-01 14:37:06.388787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.045 [2024-10-01 14:37:06.388804] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:15.045 [2024-10-01 14:37:06.388882] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:15.045 [2024-10-01 14:37:06.388910] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:15.045 [2024-10-01 14:37:06.389015] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:15.045 [2024-10-01 14:37:06.389026] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:15.045 [2024-10-01 14:37:06.389251] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:15.045 [2024-10-01 14:37:06.392746] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:15.045 [2024-10-01 14:37:06.392765] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:12:15.045 [2024-10-01 14:37:06.393022] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:15.045 pt3 00:12:15.045 14:37:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.045 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:12:15.045 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:15.045 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:15.045 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:15.045 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:15.045 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:15.045 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.045 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.045 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.045 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.045 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.045 14:37:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.045 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.045 14:37:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.045 14:37:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.045 14:37:06 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.045 "name": "raid_bdev1", 00:12:15.045 "uuid": "08e720cd-549c-469c-8eaf-7416f2847685", 00:12:15.045 "strip_size_kb": 64, 00:12:15.045 "state": "online", 00:12:15.045 "raid_level": "raid5f", 00:12:15.045 "superblock": true, 00:12:15.045 "num_base_bdevs": 3, 00:12:15.045 "num_base_bdevs_discovered": 2, 00:12:15.045 "num_base_bdevs_operational": 2, 00:12:15.045 "base_bdevs_list": [ 00:12:15.045 { 00:12:15.045 "name": null, 00:12:15.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.045 "is_configured": false, 00:12:15.045 "data_offset": 2048, 00:12:15.045 "data_size": 63488 00:12:15.045 }, 00:12:15.045 { 00:12:15.045 "name": "pt2", 00:12:15.045 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:15.045 "is_configured": true, 00:12:15.045 "data_offset": 2048, 00:12:15.045 "data_size": 63488 00:12:15.045 }, 00:12:15.045 { 00:12:15.045 "name": "pt3", 00:12:15.045 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:15.045 "is_configured": true, 00:12:15.045 "data_offset": 2048, 00:12:15.045 "data_size": 63488 00:12:15.045 } 00:12:15.045 ] 00:12:15.045 }' 00:12:15.045 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.045 14:37:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.334 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:15.334 14:37:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.334 14:37:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.334 [2024-10-01 14:37:06.705143] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:15.334 [2024-10-01 14:37:06.705188] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:15.334 [2024-10-01 14:37:06.705254] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:15.334 [2024-10-01 14:37:06.705313] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:15.334 [2024-10-01 14:37:06.705323] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:15.334 14:37:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.334 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.334 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:15.334 14:37:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.334 14:37:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.334 14:37:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.334 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:15.334 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:12:15.334 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:12:15.335 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:12:15.335 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:12:15.335 14:37:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.335 14:37:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.335 14:37:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.335 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:12:15.335 14:37:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.335 14:37:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.335 [2024-10-01 14:37:06.757175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:15.335 [2024-10-01 14:37:06.757229] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:15.335 [2024-10-01 14:37:06.757246] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:15.335 [2024-10-01 14:37:06.757254] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:15.335 [2024-10-01 14:37:06.759444] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.335 [2024-10-01 14:37:06.759479] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:15.335 [2024-10-01 14:37:06.759556] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:15.335 [2024-10-01 14:37:06.759595] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:15.335 [2024-10-01 14:37:06.759737] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:15.335 [2024-10-01 14:37:06.759751] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:15.335 [2024-10-01 14:37:06.759768] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:12:15.335 [2024-10-01 14:37:06.759821] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:15.335 pt1 00:12:15.335 14:37:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.335 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:12:15.335 14:37:06 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:12:15.335 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:15.335 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.335 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:15.335 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:15.335 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:15.335 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.335 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.335 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.335 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.335 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.335 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.335 14:37:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.335 14:37:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.335 14:37:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.335 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.335 "name": "raid_bdev1", 00:12:15.335 "uuid": "08e720cd-549c-469c-8eaf-7416f2847685", 00:12:15.335 "strip_size_kb": 64, 00:12:15.335 "state": "configuring", 00:12:15.335 "raid_level": "raid5f", 00:12:15.335 
"superblock": true, 00:12:15.335 "num_base_bdevs": 3, 00:12:15.335 "num_base_bdevs_discovered": 1, 00:12:15.335 "num_base_bdevs_operational": 2, 00:12:15.335 "base_bdevs_list": [ 00:12:15.335 { 00:12:15.335 "name": null, 00:12:15.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.335 "is_configured": false, 00:12:15.335 "data_offset": 2048, 00:12:15.335 "data_size": 63488 00:12:15.335 }, 00:12:15.335 { 00:12:15.335 "name": "pt2", 00:12:15.335 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:15.335 "is_configured": true, 00:12:15.335 "data_offset": 2048, 00:12:15.335 "data_size": 63488 00:12:15.335 }, 00:12:15.335 { 00:12:15.335 "name": null, 00:12:15.335 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:15.335 "is_configured": false, 00:12:15.335 "data_offset": 2048, 00:12:15.335 "data_size": 63488 00:12:15.335 } 00:12:15.335 ] 00:12:15.335 }' 00:12:15.335 14:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.335 14:37:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.596 14:37:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:12:15.596 14:37:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.596 14:37:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.596 14:37:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:15.596 14:37:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.596 14:37:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:12:15.596 14:37:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:15.596 14:37:07 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.597 14:37:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.597 [2024-10-01 14:37:07.133263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:15.597 [2024-10-01 14:37:07.133318] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:15.597 [2024-10-01 14:37:07.133337] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:15.597 [2024-10-01 14:37:07.133346] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:15.597 [2024-10-01 14:37:07.133772] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.597 [2024-10-01 14:37:07.133787] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:15.597 [2024-10-01 14:37:07.133854] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:15.597 [2024-10-01 14:37:07.133873] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:15.597 [2024-10-01 14:37:07.133978] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:12:15.597 [2024-10-01 14:37:07.133986] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:15.597 [2024-10-01 14:37:07.134237] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:15.597 [2024-10-01 14:37:07.138120] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:12:15.597 [2024-10-01 14:37:07.138141] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:12:15.597 [2024-10-01 14:37:07.138361] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:15.597 pt3 00:12:15.597 14:37:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:12:15.597 14:37:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:12:15.597 14:37:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:15.597 14:37:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:15.597 14:37:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:15.597 14:37:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:15.597 14:37:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:15.597 14:37:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.597 14:37:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.597 14:37:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.597 14:37:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.597 14:37:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.597 14:37:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.597 14:37:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.597 14:37:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.597 14:37:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.597 14:37:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.597 "name": "raid_bdev1", 00:12:15.597 "uuid": "08e720cd-549c-469c-8eaf-7416f2847685", 00:12:15.597 "strip_size_kb": 64, 00:12:15.597 "state": "online", 00:12:15.597 "raid_level": 
"raid5f", 00:12:15.597 "superblock": true, 00:12:15.597 "num_base_bdevs": 3, 00:12:15.597 "num_base_bdevs_discovered": 2, 00:12:15.597 "num_base_bdevs_operational": 2, 00:12:15.597 "base_bdevs_list": [ 00:12:15.597 { 00:12:15.597 "name": null, 00:12:15.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.597 "is_configured": false, 00:12:15.597 "data_offset": 2048, 00:12:15.597 "data_size": 63488 00:12:15.597 }, 00:12:15.597 { 00:12:15.597 "name": "pt2", 00:12:15.597 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:15.597 "is_configured": true, 00:12:15.597 "data_offset": 2048, 00:12:15.597 "data_size": 63488 00:12:15.597 }, 00:12:15.597 { 00:12:15.597 "name": "pt3", 00:12:15.597 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:15.597 "is_configured": true, 00:12:15.597 "data_offset": 2048, 00:12:15.597 "data_size": 63488 00:12:15.597 } 00:12:15.597 ] 00:12:15.597 }' 00:12:15.597 14:37:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.597 14:37:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.858 14:37:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:15.858 14:37:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:15.858 14:37:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.858 14:37:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.858 14:37:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.858 14:37:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:15.858 14:37:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:15.858 14:37:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 
00:12:15.858 14:37:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.858 14:37:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.858 [2024-10-01 14:37:07.490627] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:15.858 14:37:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.858 14:37:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 08e720cd-549c-469c-8eaf-7416f2847685 '!=' 08e720cd-549c-469c-8eaf-7416f2847685 ']' 00:12:15.858 14:37:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 79128 00:12:15.858 14:37:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 79128 ']' 00:12:15.858 14:37:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 79128 00:12:15.858 14:37:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:12:15.858 14:37:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:15.858 14:37:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79128 00:12:16.119 killing process with pid 79128 00:12:16.119 14:37:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:16.119 14:37:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:16.119 14:37:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79128' 00:12:16.119 14:37:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 79128 00:12:16.119 [2024-10-01 14:37:07.546294] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:16.119 [2024-10-01 14:37:07.546385] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:12:16.119 14:37:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 79128 00:12:16.119 [2024-10-01 14:37:07.546447] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:16.119 [2024-10-01 14:37:07.546459] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:12:16.119 [2024-10-01 14:37:07.735192] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:17.061 ************************************ 00:12:17.061 END TEST raid5f_superblock_test 00:12:17.061 ************************************ 00:12:17.061 14:37:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:17.061 00:12:17.061 real 0m6.059s 00:12:17.061 user 0m9.341s 00:12:17.061 sys 0m1.036s 00:12:17.061 14:37:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:17.061 14:37:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.061 14:37:08 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:12:17.061 14:37:08 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:12:17.061 14:37:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:17.061 14:37:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:17.061 14:37:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:17.061 ************************************ 00:12:17.061 START TEST raid5f_rebuild_test 00:12:17.061 ************************************ 00:12:17.061 14:37:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 false false true 00:12:17.061 14:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:12:17.061 14:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:12:17.061 14:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:17.061 14:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:17.061 14:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:17.061 14:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:17.061 14:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:17.061 14:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:17.061 14:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:17.061 14:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:17.061 14:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:17.061 14:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:17.061 14:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:17.061 14:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:17.062 14:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:17.062 14:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:17.062 14:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:17.062 14:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:17.062 14:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:17.062 14:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:17.062 14:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:17.062 14:37:08 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:17.062 14:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:17.062 14:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:12:17.062 14:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:12:17.062 14:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:12:17.062 14:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:12:17.062 14:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:17.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:17.062 14:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=79555 00:12:17.062 14:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 79555 00:12:17.062 14:37:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 79555 ']' 00:12:17.062 14:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:17.062 14:37:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.062 14:37:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:17.062 14:37:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:17.062 14:37:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:17.062 14:37:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.062 [2024-10-01 14:37:08.690613] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:12:17.062 [2024-10-01 14:37:08.691297] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79555 ] 00:12:17.062 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:17.062 Zero copy mechanism will not be used. 00:12:17.321 [2024-10-01 14:37:08.842682] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.583 [2024-10-01 14:37:09.032050] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.583 [2024-10-01 14:37:09.168780] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:17.583 [2024-10-01 14:37:09.168966] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:18.158 14:37:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:18.158 14:37:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.159 BaseBdev1_malloc 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.159 
14:37:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.159 [2024-10-01 14:37:09.578270] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:18.159 [2024-10-01 14:37:09.578437] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.159 [2024-10-01 14:37:09.578464] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:18.159 [2024-10-01 14:37:09.578477] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.159 [2024-10-01 14:37:09.580620] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.159 [2024-10-01 14:37:09.580657] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:18.159 BaseBdev1 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.159 BaseBdev2_malloc 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.159 [2024-10-01 14:37:09.631625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:18.159 [2024-10-01 14:37:09.631694] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.159 [2024-10-01 14:37:09.631727] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:18.159 [2024-10-01 14:37:09.631739] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.159 [2024-10-01 14:37:09.633896] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.159 [2024-10-01 14:37:09.633934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:18.159 BaseBdev2 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.159 BaseBdev3_malloc 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.159 [2024-10-01 14:37:09.667611] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:18.159 [2024-10-01 14:37:09.667660] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.159 [2024-10-01 14:37:09.667679] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:18.159 [2024-10-01 14:37:09.667689] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.159 [2024-10-01 14:37:09.669778] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.159 [2024-10-01 14:37:09.669810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:18.159 BaseBdev3 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.159 spare_malloc 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.159 spare_delay 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.159 [2024-10-01 14:37:09.711572] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:18.159 [2024-10-01 14:37:09.711622] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.159 [2024-10-01 14:37:09.711638] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:18.159 [2024-10-01 14:37:09.711649] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.159 [2024-10-01 14:37:09.713773] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.159 [2024-10-01 14:37:09.713854] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:18.159 spare 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.159 [2024-10-01 14:37:09.719646] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:18.159 [2024-10-01 14:37:09.721474] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:18.159 [2024-10-01 14:37:09.721538] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:18.159 [2024-10-01 14:37:09.721614] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:18.159 [2024-10-01 14:37:09.721624] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:18.159 [2024-10-01 
14:37:09.721919] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:18.159 [2024-10-01 14:37:09.725661] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:18.159 [2024-10-01 14:37:09.725683] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:18.159 [2024-10-01 14:37:09.725867] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.159 14:37:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.159 "name": "raid_bdev1", 00:12:18.159 "uuid": "0243799e-d9eb-4594-a8fe-aa79e20ba537", 00:12:18.159 "strip_size_kb": 64, 00:12:18.159 "state": "online", 00:12:18.159 "raid_level": "raid5f", 00:12:18.159 "superblock": false, 00:12:18.159 "num_base_bdevs": 3, 00:12:18.159 "num_base_bdevs_discovered": 3, 00:12:18.159 "num_base_bdevs_operational": 3, 00:12:18.159 "base_bdevs_list": [ 00:12:18.159 { 00:12:18.160 "name": "BaseBdev1", 00:12:18.160 "uuid": "2bc033e0-b289-514a-a940-f10016fa71a0", 00:12:18.160 "is_configured": true, 00:12:18.160 "data_offset": 0, 00:12:18.160 "data_size": 65536 00:12:18.160 }, 00:12:18.160 { 00:12:18.160 "name": "BaseBdev2", 00:12:18.160 "uuid": "1187e998-f2b6-5586-98c6-f520bc97bf9f", 00:12:18.160 "is_configured": true, 00:12:18.160 "data_offset": 0, 00:12:18.160 "data_size": 65536 00:12:18.160 }, 00:12:18.160 { 00:12:18.160 "name": "BaseBdev3", 00:12:18.160 "uuid": "0fa9d71e-f4ca-5ec4-9c56-887012146f18", 00:12:18.160 "is_configured": true, 00:12:18.160 "data_offset": 0, 00:12:18.160 "data_size": 65536 00:12:18.160 } 00:12:18.160 ] 00:12:18.160 }' 00:12:18.160 14:37:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.160 14:37:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.420 14:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:18.420 14:37:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.420 14:37:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.420 14:37:10 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:18.420 [2024-10-01 14:37:10.058191] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:18.420 14:37:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.420 14:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:12:18.420 14:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.420 14:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:18.420 14:37:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.420 14:37:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.680 14:37:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.680 14:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:18.680 14:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:18.680 14:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:18.680 14:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:18.680 14:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:18.680 14:37:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:18.680 14:37:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:18.680 14:37:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:18.680 14:37:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:18.680 14:37:10 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:12:18.680 14:37:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:18.680 14:37:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:18.680 14:37:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:18.680 14:37:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:18.680 [2024-10-01 14:37:10.314098] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:18.680 /dev/nbd0 00:12:18.680 14:37:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:18.680 14:37:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:18.680 14:37:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:18.680 14:37:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:18.680 14:37:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:18.680 14:37:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:18.680 14:37:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:18.680 14:37:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:18.680 14:37:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:18.680 14:37:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:18.680 14:37:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:18.680 1+0 records in 00:12:18.680 1+0 records out 00:12:18.680 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240427 s, 
17.0 MB/s 00:12:18.680 14:37:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:18.941 14:37:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:18.941 14:37:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:18.941 14:37:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:18.941 14:37:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:18.941 14:37:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:18.941 14:37:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:18.941 14:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:12:18.941 14:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:12:18.941 14:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:12:18.941 14:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:12:19.202 512+0 records in 00:12:19.202 512+0 records out 00:12:19.202 67108864 bytes (67 MB, 64 MiB) copied, 0.512004 s, 131 MB/s 00:12:19.203 14:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:19.203 14:37:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:19.203 14:37:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:19.203 14:37:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:19.203 14:37:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:19.203 14:37:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i 
in "${nbd_list[@]}" 00:12:19.203 14:37:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:19.773 14:37:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:19.773 [2024-10-01 14:37:11.155270] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:19.773 14:37:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:19.773 14:37:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:19.773 14:37:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:19.773 14:37:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:19.773 14:37:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:19.773 14:37:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:19.773 14:37:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:19.773 14:37:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:19.773 14:37:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.773 14:37:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.773 [2024-10-01 14:37:11.167623] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:19.773 14:37:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.773 14:37:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:12:19.773 14:37:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:19.773 14:37:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:12:19.773 14:37:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:19.773 14:37:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:19.773 14:37:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:19.773 14:37:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.773 14:37:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.773 14:37:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.773 14:37:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.773 14:37:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.773 14:37:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.773 14:37:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.773 14:37:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.773 14:37:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.773 14:37:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.773 "name": "raid_bdev1", 00:12:19.773 "uuid": "0243799e-d9eb-4594-a8fe-aa79e20ba537", 00:12:19.773 "strip_size_kb": 64, 00:12:19.773 "state": "online", 00:12:19.773 "raid_level": "raid5f", 00:12:19.773 "superblock": false, 00:12:19.773 "num_base_bdevs": 3, 00:12:19.773 "num_base_bdevs_discovered": 2, 00:12:19.773 "num_base_bdevs_operational": 2, 00:12:19.773 "base_bdevs_list": [ 00:12:19.773 { 00:12:19.773 "name": null, 00:12:19.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.773 "is_configured": false, 00:12:19.773 "data_offset": 0, 00:12:19.773 "data_size": 65536 00:12:19.773 }, 
00:12:19.773 { 00:12:19.773 "name": "BaseBdev2", 00:12:19.773 "uuid": "1187e998-f2b6-5586-98c6-f520bc97bf9f", 00:12:19.773 "is_configured": true, 00:12:19.773 "data_offset": 0, 00:12:19.773 "data_size": 65536 00:12:19.773 }, 00:12:19.773 { 00:12:19.773 "name": "BaseBdev3", 00:12:19.773 "uuid": "0fa9d71e-f4ca-5ec4-9c56-887012146f18", 00:12:19.773 "is_configured": true, 00:12:19.773 "data_offset": 0, 00:12:19.773 "data_size": 65536 00:12:19.773 } 00:12:19.773 ] 00:12:19.773 }' 00:12:19.773 14:37:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.773 14:37:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.035 14:37:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:20.035 14:37:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.035 14:37:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.035 [2024-10-01 14:37:11.507720] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:20.035 [2024-10-01 14:37:11.517829] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:12:20.035 14:37:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.035 14:37:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:20.036 [2024-10-01 14:37:11.523402] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:20.972 14:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:20.972 14:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:20.972 14:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:20.972 14:37:12 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:12:20.972 14:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:20.972 14:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.972 14:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.972 14:37:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.972 14:37:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.972 14:37:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.972 14:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:20.972 "name": "raid_bdev1", 00:12:20.972 "uuid": "0243799e-d9eb-4594-a8fe-aa79e20ba537", 00:12:20.972 "strip_size_kb": 64, 00:12:20.972 "state": "online", 00:12:20.972 "raid_level": "raid5f", 00:12:20.972 "superblock": false, 00:12:20.972 "num_base_bdevs": 3, 00:12:20.972 "num_base_bdevs_discovered": 3, 00:12:20.972 "num_base_bdevs_operational": 3, 00:12:20.972 "process": { 00:12:20.972 "type": "rebuild", 00:12:20.972 "target": "spare", 00:12:20.972 "progress": { 00:12:20.972 "blocks": 18432, 00:12:20.972 "percent": 14 00:12:20.972 } 00:12:20.972 }, 00:12:20.972 "base_bdevs_list": [ 00:12:20.972 { 00:12:20.972 "name": "spare", 00:12:20.972 "uuid": "bc87030b-fa63-5425-89a1-3dafc6403d4a", 00:12:20.972 "is_configured": true, 00:12:20.972 "data_offset": 0, 00:12:20.972 "data_size": 65536 00:12:20.972 }, 00:12:20.972 { 00:12:20.972 "name": "BaseBdev2", 00:12:20.972 "uuid": "1187e998-f2b6-5586-98c6-f520bc97bf9f", 00:12:20.972 "is_configured": true, 00:12:20.972 "data_offset": 0, 00:12:20.972 "data_size": 65536 00:12:20.972 }, 00:12:20.972 { 00:12:20.972 "name": "BaseBdev3", 00:12:20.972 "uuid": "0fa9d71e-f4ca-5ec4-9c56-887012146f18", 00:12:20.972 "is_configured": true, 00:12:20.972 
"data_offset": 0, 00:12:20.972 "data_size": 65536 00:12:20.972 } 00:12:20.972 ] 00:12:20.972 }' 00:12:20.972 14:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:20.972 14:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:20.972 14:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:20.972 14:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:20.972 14:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:20.972 14:37:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.972 14:37:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.972 [2024-10-01 14:37:12.644830] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:21.233 [2024-10-01 14:37:12.735077] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:21.233 [2024-10-01 14:37:12.735147] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:21.233 [2024-10-01 14:37:12.735166] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:21.233 [2024-10-01 14:37:12.735174] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:21.233 14:37:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.233 14:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:12:21.233 14:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:21.233 14:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:21.233 14:37:12 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:21.233 14:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:21.233 14:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:21.233 14:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.233 14:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.233 14:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.233 14:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.233 14:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.233 14:37:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.233 14:37:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.233 14:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.233 14:37:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.233 14:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.233 "name": "raid_bdev1", 00:12:21.233 "uuid": "0243799e-d9eb-4594-a8fe-aa79e20ba537", 00:12:21.233 "strip_size_kb": 64, 00:12:21.233 "state": "online", 00:12:21.233 "raid_level": "raid5f", 00:12:21.233 "superblock": false, 00:12:21.233 "num_base_bdevs": 3, 00:12:21.233 "num_base_bdevs_discovered": 2, 00:12:21.233 "num_base_bdevs_operational": 2, 00:12:21.233 "base_bdevs_list": [ 00:12:21.233 { 00:12:21.233 "name": null, 00:12:21.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.233 "is_configured": false, 00:12:21.233 "data_offset": 0, 00:12:21.233 "data_size": 65536 00:12:21.233 }, 00:12:21.233 { 00:12:21.233 
"name": "BaseBdev2", 00:12:21.233 "uuid": "1187e998-f2b6-5586-98c6-f520bc97bf9f", 00:12:21.233 "is_configured": true, 00:12:21.233 "data_offset": 0, 00:12:21.233 "data_size": 65536 00:12:21.233 }, 00:12:21.233 { 00:12:21.233 "name": "BaseBdev3", 00:12:21.233 "uuid": "0fa9d71e-f4ca-5ec4-9c56-887012146f18", 00:12:21.233 "is_configured": true, 00:12:21.233 "data_offset": 0, 00:12:21.233 "data_size": 65536 00:12:21.233 } 00:12:21.233 ] 00:12:21.233 }' 00:12:21.233 14:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.233 14:37:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.495 14:37:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:21.495 14:37:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:21.495 14:37:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:21.495 14:37:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:21.495 14:37:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:21.495 14:37:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.495 14:37:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.495 14:37:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.495 14:37:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.495 14:37:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.495 14:37:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:21.495 "name": "raid_bdev1", 00:12:21.495 "uuid": "0243799e-d9eb-4594-a8fe-aa79e20ba537", 00:12:21.495 "strip_size_kb": 64, 00:12:21.495 "state": 
"online", 00:12:21.495 "raid_level": "raid5f", 00:12:21.495 "superblock": false, 00:12:21.495 "num_base_bdevs": 3, 00:12:21.495 "num_base_bdevs_discovered": 2, 00:12:21.495 "num_base_bdevs_operational": 2, 00:12:21.495 "base_bdevs_list": [ 00:12:21.495 { 00:12:21.495 "name": null, 00:12:21.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.495 "is_configured": false, 00:12:21.495 "data_offset": 0, 00:12:21.495 "data_size": 65536 00:12:21.495 }, 00:12:21.495 { 00:12:21.495 "name": "BaseBdev2", 00:12:21.495 "uuid": "1187e998-f2b6-5586-98c6-f520bc97bf9f", 00:12:21.495 "is_configured": true, 00:12:21.495 "data_offset": 0, 00:12:21.495 "data_size": 65536 00:12:21.495 }, 00:12:21.495 { 00:12:21.495 "name": "BaseBdev3", 00:12:21.495 "uuid": "0fa9d71e-f4ca-5ec4-9c56-887012146f18", 00:12:21.495 "is_configured": true, 00:12:21.495 "data_offset": 0, 00:12:21.495 "data_size": 65536 00:12:21.495 } 00:12:21.495 ] 00:12:21.495 }' 00:12:21.495 14:37:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:21.495 14:37:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:21.495 14:37:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:21.495 14:37:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:21.495 14:37:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:21.495 14:37:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.495 14:37:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.495 [2024-10-01 14:37:13.173315] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:21.755 [2024-10-01 14:37:13.182734] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:12:21.755 14:37:13 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.755 14:37:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:21.755 [2024-10-01 14:37:13.188338] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:22.694 14:37:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:22.694 14:37:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:22.694 14:37:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:22.694 14:37:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:22.694 14:37:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:22.694 14:37:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.694 14:37:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.694 14:37:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.694 14:37:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.694 14:37:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.695 14:37:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:22.695 "name": "raid_bdev1", 00:12:22.695 "uuid": "0243799e-d9eb-4594-a8fe-aa79e20ba537", 00:12:22.695 "strip_size_kb": 64, 00:12:22.695 "state": "online", 00:12:22.695 "raid_level": "raid5f", 00:12:22.695 "superblock": false, 00:12:22.695 "num_base_bdevs": 3, 00:12:22.695 "num_base_bdevs_discovered": 3, 00:12:22.695 "num_base_bdevs_operational": 3, 00:12:22.695 "process": { 00:12:22.695 "type": "rebuild", 00:12:22.695 "target": "spare", 00:12:22.695 "progress": { 
00:12:22.695 "blocks": 18432, 00:12:22.695 "percent": 14 00:12:22.695 } 00:12:22.695 }, 00:12:22.695 "base_bdevs_list": [ 00:12:22.695 { 00:12:22.695 "name": "spare", 00:12:22.695 "uuid": "bc87030b-fa63-5425-89a1-3dafc6403d4a", 00:12:22.695 "is_configured": true, 00:12:22.695 "data_offset": 0, 00:12:22.695 "data_size": 65536 00:12:22.695 }, 00:12:22.695 { 00:12:22.695 "name": "BaseBdev2", 00:12:22.695 "uuid": "1187e998-f2b6-5586-98c6-f520bc97bf9f", 00:12:22.695 "is_configured": true, 00:12:22.695 "data_offset": 0, 00:12:22.695 "data_size": 65536 00:12:22.695 }, 00:12:22.695 { 00:12:22.695 "name": "BaseBdev3", 00:12:22.695 "uuid": "0fa9d71e-f4ca-5ec4-9c56-887012146f18", 00:12:22.695 "is_configured": true, 00:12:22.695 "data_offset": 0, 00:12:22.695 "data_size": 65536 00:12:22.695 } 00:12:22.695 ] 00:12:22.695 }' 00:12:22.695 14:37:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:22.695 14:37:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:22.695 14:37:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:22.695 14:37:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:22.695 14:37:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:22.695 14:37:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:12:22.695 14:37:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:12:22.695 14:37:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=454 00:12:22.695 14:37:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:22.695 14:37:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:22.695 14:37:14 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:22.695 14:37:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:22.695 14:37:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:22.695 14:37:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:22.695 14:37:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.695 14:37:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.695 14:37:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.695 14:37:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.695 14:37:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.695 14:37:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:22.695 "name": "raid_bdev1", 00:12:22.695 "uuid": "0243799e-d9eb-4594-a8fe-aa79e20ba537", 00:12:22.695 "strip_size_kb": 64, 00:12:22.695 "state": "online", 00:12:22.695 "raid_level": "raid5f", 00:12:22.695 "superblock": false, 00:12:22.695 "num_base_bdevs": 3, 00:12:22.695 "num_base_bdevs_discovered": 3, 00:12:22.695 "num_base_bdevs_operational": 3, 00:12:22.695 "process": { 00:12:22.695 "type": "rebuild", 00:12:22.695 "target": "spare", 00:12:22.695 "progress": { 00:12:22.695 "blocks": 22528, 00:12:22.695 "percent": 17 00:12:22.695 } 00:12:22.695 }, 00:12:22.695 "base_bdevs_list": [ 00:12:22.695 { 00:12:22.695 "name": "spare", 00:12:22.695 "uuid": "bc87030b-fa63-5425-89a1-3dafc6403d4a", 00:12:22.695 "is_configured": true, 00:12:22.695 "data_offset": 0, 00:12:22.695 "data_size": 65536 00:12:22.695 }, 00:12:22.695 { 00:12:22.695 "name": "BaseBdev2", 00:12:22.695 "uuid": "1187e998-f2b6-5586-98c6-f520bc97bf9f", 00:12:22.695 "is_configured": true, 00:12:22.695 
"data_offset": 0, 00:12:22.695 "data_size": 65536 00:12:22.695 }, 00:12:22.695 { 00:12:22.695 "name": "BaseBdev3", 00:12:22.695 "uuid": "0fa9d71e-f4ca-5ec4-9c56-887012146f18", 00:12:22.695 "is_configured": true, 00:12:22.695 "data_offset": 0, 00:12:22.695 "data_size": 65536 00:12:22.695 } 00:12:22.695 ] 00:12:22.695 }' 00:12:22.695 14:37:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:22.695 14:37:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:22.695 14:37:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:22.955 14:37:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:22.955 14:37:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:23.896 14:37:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:23.896 14:37:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:23.896 14:37:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:23.896 14:37:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:23.896 14:37:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:23.896 14:37:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:23.896 14:37:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.896 14:37:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.896 14:37:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.896 14:37:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.896 14:37:15 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.896 14:37:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:23.896 "name": "raid_bdev1", 00:12:23.896 "uuid": "0243799e-d9eb-4594-a8fe-aa79e20ba537", 00:12:23.896 "strip_size_kb": 64, 00:12:23.896 "state": "online", 00:12:23.896 "raid_level": "raid5f", 00:12:23.896 "superblock": false, 00:12:23.896 "num_base_bdevs": 3, 00:12:23.896 "num_base_bdevs_discovered": 3, 00:12:23.896 "num_base_bdevs_operational": 3, 00:12:23.896 "process": { 00:12:23.896 "type": "rebuild", 00:12:23.896 "target": "spare", 00:12:23.896 "progress": { 00:12:23.896 "blocks": 43008, 00:12:23.896 "percent": 32 00:12:23.896 } 00:12:23.896 }, 00:12:23.896 "base_bdevs_list": [ 00:12:23.896 { 00:12:23.896 "name": "spare", 00:12:23.896 "uuid": "bc87030b-fa63-5425-89a1-3dafc6403d4a", 00:12:23.896 "is_configured": true, 00:12:23.896 "data_offset": 0, 00:12:23.896 "data_size": 65536 00:12:23.896 }, 00:12:23.896 { 00:12:23.896 "name": "BaseBdev2", 00:12:23.896 "uuid": "1187e998-f2b6-5586-98c6-f520bc97bf9f", 00:12:23.896 "is_configured": true, 00:12:23.896 "data_offset": 0, 00:12:23.896 "data_size": 65536 00:12:23.896 }, 00:12:23.896 { 00:12:23.896 "name": "BaseBdev3", 00:12:23.896 "uuid": "0fa9d71e-f4ca-5ec4-9c56-887012146f18", 00:12:23.896 "is_configured": true, 00:12:23.896 "data_offset": 0, 00:12:23.896 "data_size": 65536 00:12:23.896 } 00:12:23.896 ] 00:12:23.896 }' 00:12:23.896 14:37:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:23.897 14:37:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:23.897 14:37:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:23.897 14:37:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:23.897 14:37:15 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:12:24.840 14:37:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:24.840 14:37:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:24.840 14:37:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:24.840 14:37:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:24.840 14:37:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:24.840 14:37:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:24.840 14:37:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.840 14:37:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.840 14:37:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.840 14:37:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.840 14:37:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.101 14:37:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:25.101 "name": "raid_bdev1", 00:12:25.101 "uuid": "0243799e-d9eb-4594-a8fe-aa79e20ba537", 00:12:25.101 "strip_size_kb": 64, 00:12:25.102 "state": "online", 00:12:25.102 "raid_level": "raid5f", 00:12:25.102 "superblock": false, 00:12:25.102 "num_base_bdevs": 3, 00:12:25.102 "num_base_bdevs_discovered": 3, 00:12:25.102 "num_base_bdevs_operational": 3, 00:12:25.102 "process": { 00:12:25.102 "type": "rebuild", 00:12:25.102 "target": "spare", 00:12:25.102 "progress": { 00:12:25.102 "blocks": 65536, 00:12:25.102 "percent": 50 00:12:25.102 } 00:12:25.102 }, 00:12:25.102 "base_bdevs_list": [ 00:12:25.102 { 00:12:25.102 "name": "spare", 00:12:25.102 
"uuid": "bc87030b-fa63-5425-89a1-3dafc6403d4a", 00:12:25.102 "is_configured": true, 00:12:25.102 "data_offset": 0, 00:12:25.102 "data_size": 65536 00:12:25.102 }, 00:12:25.102 { 00:12:25.102 "name": "BaseBdev2", 00:12:25.102 "uuid": "1187e998-f2b6-5586-98c6-f520bc97bf9f", 00:12:25.102 "is_configured": true, 00:12:25.102 "data_offset": 0, 00:12:25.102 "data_size": 65536 00:12:25.102 }, 00:12:25.102 { 00:12:25.102 "name": "BaseBdev3", 00:12:25.102 "uuid": "0fa9d71e-f4ca-5ec4-9c56-887012146f18", 00:12:25.102 "is_configured": true, 00:12:25.102 "data_offset": 0, 00:12:25.102 "data_size": 65536 00:12:25.102 } 00:12:25.102 ] 00:12:25.102 }' 00:12:25.102 14:37:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:25.102 14:37:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:25.102 14:37:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:25.102 14:37:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:25.102 14:37:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:26.043 14:37:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:26.043 14:37:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:26.043 14:37:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:26.043 14:37:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:26.043 14:37:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:26.043 14:37:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:26.043 14:37:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.043 14:37:17 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.043 14:37:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.043 14:37:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.043 14:37:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.043 14:37:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:26.043 "name": "raid_bdev1", 00:12:26.043 "uuid": "0243799e-d9eb-4594-a8fe-aa79e20ba537", 00:12:26.043 "strip_size_kb": 64, 00:12:26.043 "state": "online", 00:12:26.043 "raid_level": "raid5f", 00:12:26.043 "superblock": false, 00:12:26.043 "num_base_bdevs": 3, 00:12:26.043 "num_base_bdevs_discovered": 3, 00:12:26.043 "num_base_bdevs_operational": 3, 00:12:26.043 "process": { 00:12:26.043 "type": "rebuild", 00:12:26.043 "target": "spare", 00:12:26.043 "progress": { 00:12:26.043 "blocks": 90112, 00:12:26.043 "percent": 68 00:12:26.043 } 00:12:26.043 }, 00:12:26.043 "base_bdevs_list": [ 00:12:26.043 { 00:12:26.043 "name": "spare", 00:12:26.043 "uuid": "bc87030b-fa63-5425-89a1-3dafc6403d4a", 00:12:26.043 "is_configured": true, 00:12:26.043 "data_offset": 0, 00:12:26.043 "data_size": 65536 00:12:26.043 }, 00:12:26.043 { 00:12:26.043 "name": "BaseBdev2", 00:12:26.043 "uuid": "1187e998-f2b6-5586-98c6-f520bc97bf9f", 00:12:26.043 "is_configured": true, 00:12:26.043 "data_offset": 0, 00:12:26.043 "data_size": 65536 00:12:26.043 }, 00:12:26.043 { 00:12:26.043 "name": "BaseBdev3", 00:12:26.043 "uuid": "0fa9d71e-f4ca-5ec4-9c56-887012146f18", 00:12:26.043 "is_configured": true, 00:12:26.043 "data_offset": 0, 00:12:26.043 "data_size": 65536 00:12:26.043 } 00:12:26.043 ] 00:12:26.043 }' 00:12:26.043 14:37:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:26.043 14:37:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:26.043 14:37:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:26.043 14:37:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:26.043 14:37:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:27.467 14:37:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:27.467 14:37:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:27.467 14:37:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:27.467 14:37:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:27.467 14:37:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:27.467 14:37:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:27.467 14:37:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.467 14:37:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.467 14:37:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.467 14:37:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.467 14:37:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.467 14:37:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:27.467 "name": "raid_bdev1", 00:12:27.467 "uuid": "0243799e-d9eb-4594-a8fe-aa79e20ba537", 00:12:27.467 "strip_size_kb": 64, 00:12:27.467 "state": "online", 00:12:27.467 "raid_level": "raid5f", 00:12:27.467 "superblock": false, 00:12:27.467 "num_base_bdevs": 3, 00:12:27.467 "num_base_bdevs_discovered": 3, 00:12:27.467 
"num_base_bdevs_operational": 3, 00:12:27.467 "process": { 00:12:27.467 "type": "rebuild", 00:12:27.467 "target": "spare", 00:12:27.467 "progress": { 00:12:27.467 "blocks": 110592, 00:12:27.467 "percent": 84 00:12:27.467 } 00:12:27.467 }, 00:12:27.467 "base_bdevs_list": [ 00:12:27.467 { 00:12:27.467 "name": "spare", 00:12:27.467 "uuid": "bc87030b-fa63-5425-89a1-3dafc6403d4a", 00:12:27.467 "is_configured": true, 00:12:27.467 "data_offset": 0, 00:12:27.467 "data_size": 65536 00:12:27.467 }, 00:12:27.467 { 00:12:27.467 "name": "BaseBdev2", 00:12:27.467 "uuid": "1187e998-f2b6-5586-98c6-f520bc97bf9f", 00:12:27.467 "is_configured": true, 00:12:27.467 "data_offset": 0, 00:12:27.467 "data_size": 65536 00:12:27.467 }, 00:12:27.467 { 00:12:27.467 "name": "BaseBdev3", 00:12:27.467 "uuid": "0fa9d71e-f4ca-5ec4-9c56-887012146f18", 00:12:27.467 "is_configured": true, 00:12:27.467 "data_offset": 0, 00:12:27.467 "data_size": 65536 00:12:27.467 } 00:12:27.467 ] 00:12:27.467 }' 00:12:27.467 14:37:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:27.467 14:37:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:27.468 14:37:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:27.468 14:37:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:27.468 14:37:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:28.040 [2024-10-01 14:37:19.649165] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:28.040 [2024-10-01 14:37:19.649444] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:28.040 [2024-10-01 14:37:19.649496] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:28.301 14:37:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:12:28.301 14:37:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:28.301 14:37:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:28.301 14:37:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:28.301 14:37:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:28.301 14:37:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:28.301 14:37:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.301 14:37:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.301 14:37:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.301 14:37:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.301 14:37:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.302 14:37:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:28.302 "name": "raid_bdev1", 00:12:28.302 "uuid": "0243799e-d9eb-4594-a8fe-aa79e20ba537", 00:12:28.302 "strip_size_kb": 64, 00:12:28.302 "state": "online", 00:12:28.302 "raid_level": "raid5f", 00:12:28.302 "superblock": false, 00:12:28.302 "num_base_bdevs": 3, 00:12:28.302 "num_base_bdevs_discovered": 3, 00:12:28.302 "num_base_bdevs_operational": 3, 00:12:28.302 "base_bdevs_list": [ 00:12:28.302 { 00:12:28.302 "name": "spare", 00:12:28.302 "uuid": "bc87030b-fa63-5425-89a1-3dafc6403d4a", 00:12:28.302 "is_configured": true, 00:12:28.302 "data_offset": 0, 00:12:28.302 "data_size": 65536 00:12:28.302 }, 00:12:28.302 { 00:12:28.302 "name": "BaseBdev2", 00:12:28.302 "uuid": "1187e998-f2b6-5586-98c6-f520bc97bf9f", 00:12:28.302 "is_configured": true, 00:12:28.302 
"data_offset": 0, 00:12:28.302 "data_size": 65536 00:12:28.302 }, 00:12:28.302 { 00:12:28.302 "name": "BaseBdev3", 00:12:28.302 "uuid": "0fa9d71e-f4ca-5ec4-9c56-887012146f18", 00:12:28.302 "is_configured": true, 00:12:28.302 "data_offset": 0, 00:12:28.302 "data_size": 65536 00:12:28.302 } 00:12:28.302 ] 00:12:28.302 }' 00:12:28.302 14:37:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:28.302 14:37:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:28.302 14:37:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:28.302 14:37:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:28.302 14:37:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:28.302 14:37:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:28.302 14:37:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:28.302 14:37:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:28.302 14:37:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:28.302 14:37:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:28.302 14:37:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.302 14:37:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.302 14:37:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.302 14:37:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.302 14:37:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.302 14:37:19 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:28.302 "name": "raid_bdev1", 00:12:28.302 "uuid": "0243799e-d9eb-4594-a8fe-aa79e20ba537", 00:12:28.302 "strip_size_kb": 64, 00:12:28.302 "state": "online", 00:12:28.302 "raid_level": "raid5f", 00:12:28.302 "superblock": false, 00:12:28.302 "num_base_bdevs": 3, 00:12:28.302 "num_base_bdevs_discovered": 3, 00:12:28.302 "num_base_bdevs_operational": 3, 00:12:28.302 "base_bdevs_list": [ 00:12:28.302 { 00:12:28.302 "name": "spare", 00:12:28.302 "uuid": "bc87030b-fa63-5425-89a1-3dafc6403d4a", 00:12:28.302 "is_configured": true, 00:12:28.302 "data_offset": 0, 00:12:28.302 "data_size": 65536 00:12:28.302 }, 00:12:28.302 { 00:12:28.302 "name": "BaseBdev2", 00:12:28.302 "uuid": "1187e998-f2b6-5586-98c6-f520bc97bf9f", 00:12:28.302 "is_configured": true, 00:12:28.302 "data_offset": 0, 00:12:28.302 "data_size": 65536 00:12:28.302 }, 00:12:28.302 { 00:12:28.302 "name": "BaseBdev3", 00:12:28.302 "uuid": "0fa9d71e-f4ca-5ec4-9c56-887012146f18", 00:12:28.302 "is_configured": true, 00:12:28.302 "data_offset": 0, 00:12:28.302 "data_size": 65536 00:12:28.302 } 00:12:28.302 ] 00:12:28.302 }' 00:12:28.302 14:37:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:28.562 14:37:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:28.562 14:37:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:28.562 14:37:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:28.562 14:37:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:12:28.562 14:37:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:28.562 14:37:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:28.562 14:37:20 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:28.562 14:37:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:28.562 14:37:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:28.562 14:37:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.562 14:37:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.562 14:37:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.562 14:37:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.562 14:37:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.562 14:37:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.562 14:37:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.562 14:37:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.562 14:37:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.562 14:37:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.562 "name": "raid_bdev1", 00:12:28.562 "uuid": "0243799e-d9eb-4594-a8fe-aa79e20ba537", 00:12:28.562 "strip_size_kb": 64, 00:12:28.562 "state": "online", 00:12:28.562 "raid_level": "raid5f", 00:12:28.562 "superblock": false, 00:12:28.562 "num_base_bdevs": 3, 00:12:28.562 "num_base_bdevs_discovered": 3, 00:12:28.562 "num_base_bdevs_operational": 3, 00:12:28.562 "base_bdevs_list": [ 00:12:28.562 { 00:12:28.562 "name": "spare", 00:12:28.562 "uuid": "bc87030b-fa63-5425-89a1-3dafc6403d4a", 00:12:28.562 "is_configured": true, 00:12:28.562 "data_offset": 0, 00:12:28.562 "data_size": 65536 00:12:28.562 }, 00:12:28.562 { 00:12:28.562 
"name": "BaseBdev2", 00:12:28.562 "uuid": "1187e998-f2b6-5586-98c6-f520bc97bf9f", 00:12:28.562 "is_configured": true, 00:12:28.562 "data_offset": 0, 00:12:28.562 "data_size": 65536 00:12:28.562 }, 00:12:28.562 { 00:12:28.562 "name": "BaseBdev3", 00:12:28.562 "uuid": "0fa9d71e-f4ca-5ec4-9c56-887012146f18", 00:12:28.562 "is_configured": true, 00:12:28.562 "data_offset": 0, 00:12:28.562 "data_size": 65536 00:12:28.562 } 00:12:28.562 ] 00:12:28.562 }' 00:12:28.562 14:37:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.562 14:37:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.822 14:37:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:28.822 14:37:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.822 14:37:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.822 [2024-10-01 14:37:20.355690] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:28.822 [2024-10-01 14:37:20.355729] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:28.822 [2024-10-01 14:37:20.355807] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:28.822 [2024-10-01 14:37:20.355887] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:28.822 [2024-10-01 14:37:20.355903] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:28.822 14:37:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.822 14:37:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.822 14:37:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:28.822 14:37:20 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.822 14:37:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.822 14:37:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.822 14:37:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:28.822 14:37:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:28.822 14:37:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:28.822 14:37:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:28.822 14:37:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:28.822 14:37:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:28.822 14:37:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:28.822 14:37:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:28.822 14:37:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:28.822 14:37:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:28.822 14:37:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:28.822 14:37:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:28.822 14:37:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:29.082 /dev/nbd0 00:12:29.082 14:37:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:29.083 14:37:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:29.083 14:37:20 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:29.083 14:37:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:29.083 14:37:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:29.083 14:37:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:29.083 14:37:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:29.083 14:37:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:29.083 14:37:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:29.083 14:37:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:29.083 14:37:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:29.083 1+0 records in 00:12:29.083 1+0 records out 00:12:29.083 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000506352 s, 8.1 MB/s 00:12:29.083 14:37:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:29.083 14:37:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:29.083 14:37:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:29.083 14:37:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:29.083 14:37:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:29.083 14:37:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:29.083 14:37:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:29.083 14:37:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:29.344 /dev/nbd1 00:12:29.344 14:37:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:29.344 14:37:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:29.344 14:37:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:29.344 14:37:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:29.344 14:37:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:29.344 14:37:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:29.344 14:37:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:29.344 14:37:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:29.344 14:37:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:29.344 14:37:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:29.344 14:37:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:29.344 1+0 records in 00:12:29.344 1+0 records out 00:12:29.344 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000523013 s, 7.8 MB/s 00:12:29.344 14:37:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:29.344 14:37:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:29.344 14:37:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:29.344 14:37:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:29.344 14:37:20 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:29.344 14:37:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:29.344 14:37:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:29.344 14:37:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:29.646 14:37:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:29.646 14:37:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:29.646 14:37:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:29.646 14:37:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:29.646 14:37:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:29.646 14:37:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:29.646 14:37:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:29.646 14:37:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:29.646 14:37:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:29.646 14:37:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:29.646 14:37:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:29.646 14:37:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:29.646 14:37:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:29.646 14:37:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:29.646 14:37:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:12:29.646 14:37:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:29.646 14:37:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:29.906 14:37:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:29.906 14:37:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:29.906 14:37:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:29.906 14:37:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:29.906 14:37:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:29.906 14:37:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:29.906 14:37:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:29.906 14:37:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:29.906 14:37:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:29.906 14:37:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 79555 00:12:29.906 14:37:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 79555 ']' 00:12:29.906 14:37:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 79555 00:12:29.906 14:37:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:12:29.906 14:37:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:29.906 14:37:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79555 00:12:29.906 killing process with pid 79555 00:12:29.906 Received shutdown signal, test time was about 60.000000 seconds 00:12:29.906 00:12:29.906 Latency(us) 00:12:29.906 
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:29.906 =================================================================================================================== 00:12:29.906 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:29.906 14:37:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:29.906 14:37:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:29.906 14:37:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79555' 00:12:29.906 14:37:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 79555 00:12:29.906 [2024-10-01 14:37:21.539856] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:29.906 14:37:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 79555 00:12:30.166 [2024-10-01 14:37:21.789641] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:31.101 14:37:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:31.101 00:12:31.101 real 0m13.989s 00:12:31.101 user 0m16.889s 00:12:31.101 sys 0m1.611s 00:12:31.101 14:37:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:31.101 ************************************ 00:12:31.101 END TEST raid5f_rebuild_test 00:12:31.101 14:37:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.101 ************************************ 00:12:31.101 14:37:22 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:12:31.101 14:37:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:31.101 14:37:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:31.101 14:37:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:31.101 
************************************ 00:12:31.101 START TEST raid5f_rebuild_test_sb 00:12:31.101 ************************************ 00:12:31.101 14:37:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 true false true 00:12:31.101 14:37:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:12:31.101 14:37:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:12:31.101 14:37:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:31.101 14:37:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:31.101 14:37:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:31.101 14:37:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:31.101 14:37:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:31.101 14:37:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:31.101 14:37:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:31.101 14:37:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:31.101 14:37:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:31.101 14:37:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:31.101 14:37:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:31.101 14:37:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:31.101 14:37:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:31.101 14:37:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:31.101 14:37:22 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:31.101 14:37:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:31.101 14:37:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:31.101 14:37:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:31.101 14:37:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:31.101 14:37:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:31.101 14:37:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:31.101 14:37:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:12:31.101 14:37:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:12:31.101 14:37:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:12:31.101 14:37:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:12:31.101 14:37:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:31.102 14:37:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:31.102 14:37:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=79985 00:12:31.102 14:37:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 79985 00:12:31.102 14:37:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 79985 ']' 00:12:31.102 14:37:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:31.102 14:37:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.102 
14:37:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:31.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.102 14:37:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.102 14:37:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:31.102 14:37:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.102 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:31.102 Zero copy mechanism will not be used. 00:12:31.102 [2024-10-01 14:37:22.748758] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:12:31.102 [2024-10-01 14:37:22.748887] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79985 ] 00:12:31.363 [2024-10-01 14:37:22.896417] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.622 [2024-10-01 14:37:23.088016] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.622 [2024-10-01 14:37:23.225958] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:31.622 [2024-10-01 14:37:23.226004] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:32.192 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:32.192 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:12:32.192 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:32.192 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 
-- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:32.192 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.192 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.192 BaseBdev1_malloc 00:12:32.192 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.192 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:32.192 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.192 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.192 [2024-10-01 14:37:23.627747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:32.192 [2024-10-01 14:37:23.627810] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.192 [2024-10-01 14:37:23.627831] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:32.192 [2024-10-01 14:37:23.627844] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.192 [2024-10-01 14:37:23.630003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.192 [2024-10-01 14:37:23.630041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:32.192 BaseBdev1 00:12:32.192 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.192 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:32.192 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:32.192 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.192 
14:37:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.192 BaseBdev2_malloc 00:12:32.192 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.192 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:32.192 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.192 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.192 [2024-10-01 14:37:23.676173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:32.192 [2024-10-01 14:37:23.676238] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.192 [2024-10-01 14:37:23.676259] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:32.192 [2024-10-01 14:37:23.676272] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.192 [2024-10-01 14:37:23.678425] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.192 [2024-10-01 14:37:23.678467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:32.192 BaseBdev2 00:12:32.192 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.192 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:32.192 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:32.192 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.192 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.192 BaseBdev3_malloc 00:12:32.192 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.192 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:32.192 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.192 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.192 [2024-10-01 14:37:23.712393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:32.192 [2024-10-01 14:37:23.712452] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.192 [2024-10-01 14:37:23.712473] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:32.192 [2024-10-01 14:37:23.712484] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.192 [2024-10-01 14:37:23.714648] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.192 [2024-10-01 14:37:23.714685] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:32.192 BaseBdev3 00:12:32.192 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.192 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:32.192 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.192 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.192 spare_malloc 00:12:32.192 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.192 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:32.192 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.192 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.192 spare_delay 00:12:32.192 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.192 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:32.192 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.192 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.193 [2024-10-01 14:37:23.760751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:32.193 [2024-10-01 14:37:23.760805] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.193 [2024-10-01 14:37:23.760823] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:32.193 [2024-10-01 14:37:23.760834] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.193 [2024-10-01 14:37:23.763004] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.193 [2024-10-01 14:37:23.763043] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:32.193 spare 00:12:32.193 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.193 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:12:32.193 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.193 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.193 [2024-10-01 14:37:23.768838] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:12:32.193 [2024-10-01 14:37:23.770701] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:32.193 [2024-10-01 14:37:23.770783] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:32.193 [2024-10-01 14:37:23.770956] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:32.193 [2024-10-01 14:37:23.770966] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:32.193 [2024-10-01 14:37:23.771243] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:32.193 [2024-10-01 14:37:23.775072] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:32.193 [2024-10-01 14:37:23.775096] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:32.193 [2024-10-01 14:37:23.775293] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:32.193 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.193 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:12:32.193 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:32.193 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:32.193 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:32.193 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:32.193 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:32.193 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.193 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.193 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.193 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.193 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.193 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.193 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.193 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.193 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.193 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.193 "name": "raid_bdev1", 00:12:32.193 "uuid": "6490a049-216c-4a26-b809-e639aa3489e1", 00:12:32.193 "strip_size_kb": 64, 00:12:32.193 "state": "online", 00:12:32.193 "raid_level": "raid5f", 00:12:32.193 "superblock": true, 00:12:32.193 "num_base_bdevs": 3, 00:12:32.193 "num_base_bdevs_discovered": 3, 00:12:32.193 "num_base_bdevs_operational": 3, 00:12:32.193 "base_bdevs_list": [ 00:12:32.193 { 00:12:32.193 "name": "BaseBdev1", 00:12:32.193 "uuid": "83c9f2f0-20ea-5a53-84e1-e918ea5fb18d", 00:12:32.193 "is_configured": true, 00:12:32.193 "data_offset": 2048, 00:12:32.193 "data_size": 63488 00:12:32.193 }, 00:12:32.193 { 00:12:32.193 "name": "BaseBdev2", 00:12:32.193 "uuid": "561f3bd4-a2f6-581c-ab2c-e2c201cb88e1", 00:12:32.193 "is_configured": true, 00:12:32.193 "data_offset": 2048, 00:12:32.193 "data_size": 63488 00:12:32.193 }, 00:12:32.193 { 00:12:32.193 "name": "BaseBdev3", 00:12:32.193 "uuid": "4f0bdffe-1053-5602-abb1-1576eb23c750", 00:12:32.193 "is_configured": true, 00:12:32.193 "data_offset": 2048, 00:12:32.193 "data_size": 63488 00:12:32.193 } 
00:12:32.193 ] 00:12:32.193 }' 00:12:32.193 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.193 14:37:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.453 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:32.453 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:32.453 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.453 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.453 [2024-10-01 14:37:24.095606] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:32.453 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.453 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:12:32.453 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.453 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.453 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.453 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:32.714 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.714 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:32.714 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:32.714 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:32.714 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:32.714 
14:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:32.714 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:32.714 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:32.714 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:32.714 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:32.714 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:32.714 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:32.714 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:32.714 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:32.714 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:32.714 [2024-10-01 14:37:24.347498] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:32.714 /dev/nbd0 00:12:32.714 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:32.714 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:32.714 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:32.714 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:32.714 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:32.714 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:32.714 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:32.714 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:32.714 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:32.714 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:32.714 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:32.974 1+0 records in 00:12:32.974 1+0 records out 00:12:32.974 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303295 s, 13.5 MB/s 00:12:32.974 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.974 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:32.974 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.974 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:32.974 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:32.974 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:32.974 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:32.974 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:12:32.974 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:12:32.974 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:12:32.974 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:12:33.234 496+0 records in 
00:12:33.234 496+0 records out 00:12:33.234 65011712 bytes (65 MB, 62 MiB) copied, 0.468516 s, 139 MB/s 00:12:33.234 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:33.234 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:33.234 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:33.234 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:33.234 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:33.234 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:33.234 14:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:33.494 [2024-10-01 14:37:25.066531] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:33.494 14:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:33.494 14:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:33.494 14:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:33.494 14:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:33.494 14:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:33.494 14:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:33.494 14:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:33.494 14:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:33.494 14:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 
00:12:33.494 14:37:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.494 14:37:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.494 [2024-10-01 14:37:25.098735] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:33.494 14:37:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.494 14:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:12:33.494 14:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:33.494 14:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:33.494 14:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:33.494 14:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:33.494 14:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:33.494 14:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.494 14:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.494 14:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.495 14:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.495 14:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.495 14:37:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.495 14:37:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.495 14:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:12:33.495 14:37:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.495 14:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.495 "name": "raid_bdev1", 00:12:33.495 "uuid": "6490a049-216c-4a26-b809-e639aa3489e1", 00:12:33.495 "strip_size_kb": 64, 00:12:33.495 "state": "online", 00:12:33.495 "raid_level": "raid5f", 00:12:33.495 "superblock": true, 00:12:33.495 "num_base_bdevs": 3, 00:12:33.495 "num_base_bdevs_discovered": 2, 00:12:33.495 "num_base_bdevs_operational": 2, 00:12:33.495 "base_bdevs_list": [ 00:12:33.495 { 00:12:33.495 "name": null, 00:12:33.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.495 "is_configured": false, 00:12:33.495 "data_offset": 0, 00:12:33.495 "data_size": 63488 00:12:33.495 }, 00:12:33.495 { 00:12:33.495 "name": "BaseBdev2", 00:12:33.495 "uuid": "561f3bd4-a2f6-581c-ab2c-e2c201cb88e1", 00:12:33.495 "is_configured": true, 00:12:33.495 "data_offset": 2048, 00:12:33.495 "data_size": 63488 00:12:33.495 }, 00:12:33.495 { 00:12:33.495 "name": "BaseBdev3", 00:12:33.495 "uuid": "4f0bdffe-1053-5602-abb1-1576eb23c750", 00:12:33.495 "is_configured": true, 00:12:33.495 "data_offset": 2048, 00:12:33.495 "data_size": 63488 00:12:33.495 } 00:12:33.495 ] 00:12:33.495 }' 00:12:33.495 14:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.495 14:37:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.068 14:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:34.068 14:37:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.068 14:37:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.068 [2024-10-01 14:37:25.458857] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:34.068 [2024-10-01 
14:37:25.469062] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:12:34.068 14:37:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.068 14:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:34.068 [2024-10-01 14:37:25.474586] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:35.009 14:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:35.009 14:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:35.009 14:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:35.009 14:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:35.009 14:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:35.009 14:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.010 14:37:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.010 14:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.010 14:37:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.010 14:37:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.010 14:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:35.010 "name": "raid_bdev1", 00:12:35.010 "uuid": "6490a049-216c-4a26-b809-e639aa3489e1", 00:12:35.010 "strip_size_kb": 64, 00:12:35.010 "state": "online", 00:12:35.010 "raid_level": "raid5f", 00:12:35.010 "superblock": true, 00:12:35.010 "num_base_bdevs": 3, 00:12:35.010 "num_base_bdevs_discovered": 3, 00:12:35.010 
"num_base_bdevs_operational": 3, 00:12:35.010 "process": { 00:12:35.010 "type": "rebuild", 00:12:35.010 "target": "spare", 00:12:35.010 "progress": { 00:12:35.010 "blocks": 18432, 00:12:35.010 "percent": 14 00:12:35.010 } 00:12:35.010 }, 00:12:35.010 "base_bdevs_list": [ 00:12:35.010 { 00:12:35.010 "name": "spare", 00:12:35.010 "uuid": "97fdbf46-3a7d-5aab-9bc1-13af302e7251", 00:12:35.010 "is_configured": true, 00:12:35.010 "data_offset": 2048, 00:12:35.010 "data_size": 63488 00:12:35.010 }, 00:12:35.010 { 00:12:35.010 "name": "BaseBdev2", 00:12:35.010 "uuid": "561f3bd4-a2f6-581c-ab2c-e2c201cb88e1", 00:12:35.010 "is_configured": true, 00:12:35.010 "data_offset": 2048, 00:12:35.010 "data_size": 63488 00:12:35.010 }, 00:12:35.010 { 00:12:35.010 "name": "BaseBdev3", 00:12:35.010 "uuid": "4f0bdffe-1053-5602-abb1-1576eb23c750", 00:12:35.010 "is_configured": true, 00:12:35.010 "data_offset": 2048, 00:12:35.010 "data_size": 63488 00:12:35.010 } 00:12:35.010 ] 00:12:35.010 }' 00:12:35.010 14:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:35.010 14:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:35.010 14:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:35.010 14:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:35.010 14:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:35.010 14:37:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.010 14:37:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.010 [2024-10-01 14:37:26.588433] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:35.010 [2024-10-01 14:37:26.686437] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild 
on raid bdev raid_bdev1: No such device 00:12:35.010 [2024-10-01 14:37:26.686515] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:35.010 [2024-10-01 14:37:26.686534] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:35.010 [2024-10-01 14:37:26.686543] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:35.270 14:37:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.270 14:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:12:35.270 14:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:35.270 14:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:35.270 14:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:35.270 14:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:35.270 14:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:35.270 14:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.270 14:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.270 14:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.270 14:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.270 14:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.271 14:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.271 14:37:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:35.271 14:37:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.271 14:37:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.271 14:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.271 "name": "raid_bdev1", 00:12:35.271 "uuid": "6490a049-216c-4a26-b809-e639aa3489e1", 00:12:35.271 "strip_size_kb": 64, 00:12:35.271 "state": "online", 00:12:35.271 "raid_level": "raid5f", 00:12:35.271 "superblock": true, 00:12:35.271 "num_base_bdevs": 3, 00:12:35.271 "num_base_bdevs_discovered": 2, 00:12:35.271 "num_base_bdevs_operational": 2, 00:12:35.271 "base_bdevs_list": [ 00:12:35.271 { 00:12:35.271 "name": null, 00:12:35.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.271 "is_configured": false, 00:12:35.271 "data_offset": 0, 00:12:35.271 "data_size": 63488 00:12:35.271 }, 00:12:35.271 { 00:12:35.271 "name": "BaseBdev2", 00:12:35.271 "uuid": "561f3bd4-a2f6-581c-ab2c-e2c201cb88e1", 00:12:35.271 "is_configured": true, 00:12:35.271 "data_offset": 2048, 00:12:35.271 "data_size": 63488 00:12:35.271 }, 00:12:35.271 { 00:12:35.271 "name": "BaseBdev3", 00:12:35.271 "uuid": "4f0bdffe-1053-5602-abb1-1576eb23c750", 00:12:35.271 "is_configured": true, 00:12:35.271 "data_offset": 2048, 00:12:35.271 "data_size": 63488 00:12:35.271 } 00:12:35.271 ] 00:12:35.271 }' 00:12:35.271 14:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.271 14:37:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.531 14:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:35.531 14:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:35.531 14:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:35.531 14:37:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:35.531 14:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:35.531 14:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.531 14:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.531 14:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.531 14:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.531 14:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.531 14:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:35.531 "name": "raid_bdev1", 00:12:35.531 "uuid": "6490a049-216c-4a26-b809-e639aa3489e1", 00:12:35.531 "strip_size_kb": 64, 00:12:35.531 "state": "online", 00:12:35.531 "raid_level": "raid5f", 00:12:35.531 "superblock": true, 00:12:35.531 "num_base_bdevs": 3, 00:12:35.531 "num_base_bdevs_discovered": 2, 00:12:35.531 "num_base_bdevs_operational": 2, 00:12:35.531 "base_bdevs_list": [ 00:12:35.531 { 00:12:35.531 "name": null, 00:12:35.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.531 "is_configured": false, 00:12:35.531 "data_offset": 0, 00:12:35.531 "data_size": 63488 00:12:35.531 }, 00:12:35.531 { 00:12:35.531 "name": "BaseBdev2", 00:12:35.531 "uuid": "561f3bd4-a2f6-581c-ab2c-e2c201cb88e1", 00:12:35.531 "is_configured": true, 00:12:35.531 "data_offset": 2048, 00:12:35.531 "data_size": 63488 00:12:35.531 }, 00:12:35.531 { 00:12:35.531 "name": "BaseBdev3", 00:12:35.531 "uuid": "4f0bdffe-1053-5602-abb1-1576eb23c750", 00:12:35.531 "is_configured": true, 00:12:35.531 "data_offset": 2048, 00:12:35.531 "data_size": 63488 00:12:35.531 } 00:12:35.531 ] 00:12:35.531 }' 00:12:35.531 14:37:27 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:35.531 14:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:35.531 14:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:35.531 14:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:35.531 14:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:35.531 14:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.531 14:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.531 [2024-10-01 14:37:27.169471] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:35.531 [2024-10-01 14:37:27.178892] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:12:35.531 14:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.531 14:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:35.531 [2024-10-01 14:37:27.185587] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:36.910 14:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:36.910 14:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:36.910 14:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:36.910 14:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:36.910 14:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:36.910 14:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:36.910 14:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.910 14:37:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.910 14:37:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.910 14:37:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.910 14:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:36.910 "name": "raid_bdev1", 00:12:36.910 "uuid": "6490a049-216c-4a26-b809-e639aa3489e1", 00:12:36.910 "strip_size_kb": 64, 00:12:36.910 "state": "online", 00:12:36.910 "raid_level": "raid5f", 00:12:36.910 "superblock": true, 00:12:36.910 "num_base_bdevs": 3, 00:12:36.910 "num_base_bdevs_discovered": 3, 00:12:36.911 "num_base_bdevs_operational": 3, 00:12:36.911 "process": { 00:12:36.911 "type": "rebuild", 00:12:36.911 "target": "spare", 00:12:36.911 "progress": { 00:12:36.911 "blocks": 18432, 00:12:36.911 "percent": 14 00:12:36.911 } 00:12:36.911 }, 00:12:36.911 "base_bdevs_list": [ 00:12:36.911 { 00:12:36.911 "name": "spare", 00:12:36.911 "uuid": "97fdbf46-3a7d-5aab-9bc1-13af302e7251", 00:12:36.911 "is_configured": true, 00:12:36.911 "data_offset": 2048, 00:12:36.911 "data_size": 63488 00:12:36.911 }, 00:12:36.911 { 00:12:36.911 "name": "BaseBdev2", 00:12:36.911 "uuid": "561f3bd4-a2f6-581c-ab2c-e2c201cb88e1", 00:12:36.911 "is_configured": true, 00:12:36.911 "data_offset": 2048, 00:12:36.911 "data_size": 63488 00:12:36.911 }, 00:12:36.911 { 00:12:36.911 "name": "BaseBdev3", 00:12:36.911 "uuid": "4f0bdffe-1053-5602-abb1-1576eb23c750", 00:12:36.911 "is_configured": true, 00:12:36.911 "data_offset": 2048, 00:12:36.911 "data_size": 63488 00:12:36.911 } 00:12:36.911 ] 00:12:36.911 }' 00:12:36.911 14:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:12:36.911 14:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:36.911 14:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:36.911 14:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:36.911 14:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:36.911 14:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:36.911 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:36.911 14:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:12:36.911 14:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:12:36.911 14:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=468 00:12:36.911 14:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:36.911 14:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:36.911 14:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:36.911 14:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:36.911 14:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:36.911 14:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:36.911 14:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.911 14:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.911 14:37:28 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.911 14:37:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.911 14:37:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.911 14:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:36.911 "name": "raid_bdev1", 00:12:36.911 "uuid": "6490a049-216c-4a26-b809-e639aa3489e1", 00:12:36.911 "strip_size_kb": 64, 00:12:36.911 "state": "online", 00:12:36.911 "raid_level": "raid5f", 00:12:36.911 "superblock": true, 00:12:36.911 "num_base_bdevs": 3, 00:12:36.911 "num_base_bdevs_discovered": 3, 00:12:36.911 "num_base_bdevs_operational": 3, 00:12:36.911 "process": { 00:12:36.911 "type": "rebuild", 00:12:36.911 "target": "spare", 00:12:36.911 "progress": { 00:12:36.911 "blocks": 20480, 00:12:36.911 "percent": 16 00:12:36.911 } 00:12:36.911 }, 00:12:36.911 "base_bdevs_list": [ 00:12:36.911 { 00:12:36.911 "name": "spare", 00:12:36.911 "uuid": "97fdbf46-3a7d-5aab-9bc1-13af302e7251", 00:12:36.911 "is_configured": true, 00:12:36.911 "data_offset": 2048, 00:12:36.911 "data_size": 63488 00:12:36.911 }, 00:12:36.911 { 00:12:36.911 "name": "BaseBdev2", 00:12:36.911 "uuid": "561f3bd4-a2f6-581c-ab2c-e2c201cb88e1", 00:12:36.911 "is_configured": true, 00:12:36.911 "data_offset": 2048, 00:12:36.911 "data_size": 63488 00:12:36.911 }, 00:12:36.911 { 00:12:36.911 "name": "BaseBdev3", 00:12:36.911 "uuid": "4f0bdffe-1053-5602-abb1-1576eb23c750", 00:12:36.911 "is_configured": true, 00:12:36.911 "data_offset": 2048, 00:12:36.911 "data_size": 63488 00:12:36.911 } 00:12:36.911 ] 00:12:36.911 }' 00:12:36.911 14:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:36.911 14:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:36.911 14:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:12:36.911 14:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:36.911 14:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:37.855 14:37:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:37.855 14:37:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:37.855 14:37:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:37.855 14:37:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:37.855 14:37:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:37.855 14:37:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:37.855 14:37:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.855 14:37:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.855 14:37:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.855 14:37:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.855 14:37:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.855 14:37:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:37.855 "name": "raid_bdev1", 00:12:37.855 "uuid": "6490a049-216c-4a26-b809-e639aa3489e1", 00:12:37.855 "strip_size_kb": 64, 00:12:37.855 "state": "online", 00:12:37.855 "raid_level": "raid5f", 00:12:37.855 "superblock": true, 00:12:37.855 "num_base_bdevs": 3, 00:12:37.855 "num_base_bdevs_discovered": 3, 00:12:37.855 "num_base_bdevs_operational": 3, 00:12:37.855 "process": { 00:12:37.855 "type": "rebuild", 
00:12:37.855 "target": "spare", 00:12:37.855 "progress": { 00:12:37.855 "blocks": 43008, 00:12:37.855 "percent": 33 00:12:37.855 } 00:12:37.855 }, 00:12:37.855 "base_bdevs_list": [ 00:12:37.855 { 00:12:37.855 "name": "spare", 00:12:37.855 "uuid": "97fdbf46-3a7d-5aab-9bc1-13af302e7251", 00:12:37.855 "is_configured": true, 00:12:37.855 "data_offset": 2048, 00:12:37.855 "data_size": 63488 00:12:37.855 }, 00:12:37.855 { 00:12:37.855 "name": "BaseBdev2", 00:12:37.855 "uuid": "561f3bd4-a2f6-581c-ab2c-e2c201cb88e1", 00:12:37.855 "is_configured": true, 00:12:37.855 "data_offset": 2048, 00:12:37.855 "data_size": 63488 00:12:37.855 }, 00:12:37.855 { 00:12:37.855 "name": "BaseBdev3", 00:12:37.855 "uuid": "4f0bdffe-1053-5602-abb1-1576eb23c750", 00:12:37.855 "is_configured": true, 00:12:37.855 "data_offset": 2048, 00:12:37.855 "data_size": 63488 00:12:37.855 } 00:12:37.855 ] 00:12:37.855 }' 00:12:37.855 14:37:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:37.855 14:37:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:37.855 14:37:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:37.855 14:37:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:37.855 14:37:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:38.798 14:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:38.798 14:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:38.798 14:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:38.798 14:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:38.798 14:37:30 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:12:38.798 14:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:38.798 14:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.798 14:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.798 14:37:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.798 14:37:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.057 14:37:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.057 14:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:39.057 "name": "raid_bdev1", 00:12:39.057 "uuid": "6490a049-216c-4a26-b809-e639aa3489e1", 00:12:39.057 "strip_size_kb": 64, 00:12:39.057 "state": "online", 00:12:39.057 "raid_level": "raid5f", 00:12:39.057 "superblock": true, 00:12:39.057 "num_base_bdevs": 3, 00:12:39.057 "num_base_bdevs_discovered": 3, 00:12:39.057 "num_base_bdevs_operational": 3, 00:12:39.057 "process": { 00:12:39.057 "type": "rebuild", 00:12:39.057 "target": "spare", 00:12:39.057 "progress": { 00:12:39.057 "blocks": 65536, 00:12:39.057 "percent": 51 00:12:39.057 } 00:12:39.057 }, 00:12:39.057 "base_bdevs_list": [ 00:12:39.057 { 00:12:39.057 "name": "spare", 00:12:39.057 "uuid": "97fdbf46-3a7d-5aab-9bc1-13af302e7251", 00:12:39.057 "is_configured": true, 00:12:39.057 "data_offset": 2048, 00:12:39.057 "data_size": 63488 00:12:39.057 }, 00:12:39.057 { 00:12:39.057 "name": "BaseBdev2", 00:12:39.057 "uuid": "561f3bd4-a2f6-581c-ab2c-e2c201cb88e1", 00:12:39.057 "is_configured": true, 00:12:39.057 "data_offset": 2048, 00:12:39.057 "data_size": 63488 00:12:39.057 }, 00:12:39.057 { 00:12:39.057 "name": "BaseBdev3", 00:12:39.057 "uuid": "4f0bdffe-1053-5602-abb1-1576eb23c750", 00:12:39.057 
"is_configured": true, 00:12:39.057 "data_offset": 2048, 00:12:39.057 "data_size": 63488 00:12:39.057 } 00:12:39.057 ] 00:12:39.057 }' 00:12:39.057 14:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:39.057 14:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:39.057 14:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:39.057 14:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:39.057 14:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:39.996 14:37:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:39.996 14:37:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:39.996 14:37:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:39.996 14:37:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:39.996 14:37:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:39.996 14:37:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:39.996 14:37:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.996 14:37:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.996 14:37:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.996 14:37:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.996 14:37:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.996 14:37:31 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:39.996 "name": "raid_bdev1", 00:12:39.996 "uuid": "6490a049-216c-4a26-b809-e639aa3489e1", 00:12:39.996 "strip_size_kb": 64, 00:12:39.996 "state": "online", 00:12:39.996 "raid_level": "raid5f", 00:12:39.996 "superblock": true, 00:12:39.996 "num_base_bdevs": 3, 00:12:39.996 "num_base_bdevs_discovered": 3, 00:12:39.996 "num_base_bdevs_operational": 3, 00:12:39.996 "process": { 00:12:39.996 "type": "rebuild", 00:12:39.996 "target": "spare", 00:12:39.996 "progress": { 00:12:39.996 "blocks": 88064, 00:12:39.996 "percent": 69 00:12:39.996 } 00:12:39.996 }, 00:12:39.996 "base_bdevs_list": [ 00:12:39.996 { 00:12:39.996 "name": "spare", 00:12:39.996 "uuid": "97fdbf46-3a7d-5aab-9bc1-13af302e7251", 00:12:39.996 "is_configured": true, 00:12:39.996 "data_offset": 2048, 00:12:39.996 "data_size": 63488 00:12:39.996 }, 00:12:39.996 { 00:12:39.996 "name": "BaseBdev2", 00:12:39.996 "uuid": "561f3bd4-a2f6-581c-ab2c-e2c201cb88e1", 00:12:39.996 "is_configured": true, 00:12:39.996 "data_offset": 2048, 00:12:39.996 "data_size": 63488 00:12:39.996 }, 00:12:39.996 { 00:12:39.996 "name": "BaseBdev3", 00:12:39.996 "uuid": "4f0bdffe-1053-5602-abb1-1576eb23c750", 00:12:39.996 "is_configured": true, 00:12:39.996 "data_offset": 2048, 00:12:39.996 "data_size": 63488 00:12:39.996 } 00:12:39.996 ] 00:12:39.996 }' 00:12:39.996 14:37:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:39.996 14:37:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:39.996 14:37:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:40.259 14:37:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:40.259 14:37:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:41.201 14:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( 
SECONDS < timeout )) 00:12:41.201 14:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:41.201 14:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:41.201 14:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:41.201 14:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:41.201 14:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:41.201 14:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.201 14:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.201 14:37:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.201 14:37:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.201 14:37:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.201 14:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:41.201 "name": "raid_bdev1", 00:12:41.201 "uuid": "6490a049-216c-4a26-b809-e639aa3489e1", 00:12:41.201 "strip_size_kb": 64, 00:12:41.201 "state": "online", 00:12:41.202 "raid_level": "raid5f", 00:12:41.202 "superblock": true, 00:12:41.202 "num_base_bdevs": 3, 00:12:41.202 "num_base_bdevs_discovered": 3, 00:12:41.202 "num_base_bdevs_operational": 3, 00:12:41.202 "process": { 00:12:41.202 "type": "rebuild", 00:12:41.202 "target": "spare", 00:12:41.202 "progress": { 00:12:41.202 "blocks": 110592, 00:12:41.202 "percent": 87 00:12:41.202 } 00:12:41.202 }, 00:12:41.202 "base_bdevs_list": [ 00:12:41.202 { 00:12:41.202 "name": "spare", 00:12:41.202 "uuid": "97fdbf46-3a7d-5aab-9bc1-13af302e7251", 00:12:41.202 "is_configured": true, 
00:12:41.202 "data_offset": 2048, 00:12:41.202 "data_size": 63488 00:12:41.202 }, 00:12:41.202 { 00:12:41.202 "name": "BaseBdev2", 00:12:41.202 "uuid": "561f3bd4-a2f6-581c-ab2c-e2c201cb88e1", 00:12:41.202 "is_configured": true, 00:12:41.202 "data_offset": 2048, 00:12:41.202 "data_size": 63488 00:12:41.202 }, 00:12:41.202 { 00:12:41.202 "name": "BaseBdev3", 00:12:41.202 "uuid": "4f0bdffe-1053-5602-abb1-1576eb23c750", 00:12:41.202 "is_configured": true, 00:12:41.202 "data_offset": 2048, 00:12:41.202 "data_size": 63488 00:12:41.202 } 00:12:41.202 ] 00:12:41.202 }' 00:12:41.202 14:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:41.202 14:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:41.202 14:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:41.202 14:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:41.202 14:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:41.774 [2024-10-01 14:37:33.444036] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:41.774 [2024-10-01 14:37:33.444135] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:41.774 [2024-10-01 14:37:33.444257] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:42.348 14:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:42.348 14:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:42.348 14:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:42.348 14:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:42.348 
14:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:42.348 14:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:42.348 14:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.348 14:37:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.348 14:37:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.348 14:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.348 14:37:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.348 14:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:42.348 "name": "raid_bdev1", 00:12:42.348 "uuid": "6490a049-216c-4a26-b809-e639aa3489e1", 00:12:42.348 "strip_size_kb": 64, 00:12:42.348 "state": "online", 00:12:42.348 "raid_level": "raid5f", 00:12:42.348 "superblock": true, 00:12:42.348 "num_base_bdevs": 3, 00:12:42.348 "num_base_bdevs_discovered": 3, 00:12:42.348 "num_base_bdevs_operational": 3, 00:12:42.348 "base_bdevs_list": [ 00:12:42.348 { 00:12:42.348 "name": "spare", 00:12:42.348 "uuid": "97fdbf46-3a7d-5aab-9bc1-13af302e7251", 00:12:42.348 "is_configured": true, 00:12:42.348 "data_offset": 2048, 00:12:42.348 "data_size": 63488 00:12:42.348 }, 00:12:42.348 { 00:12:42.348 "name": "BaseBdev2", 00:12:42.348 "uuid": "561f3bd4-a2f6-581c-ab2c-e2c201cb88e1", 00:12:42.348 "is_configured": true, 00:12:42.348 "data_offset": 2048, 00:12:42.348 "data_size": 63488 00:12:42.348 }, 00:12:42.348 { 00:12:42.348 "name": "BaseBdev3", 00:12:42.348 "uuid": "4f0bdffe-1053-5602-abb1-1576eb23c750", 00:12:42.348 "is_configured": true, 00:12:42.348 "data_offset": 2048, 00:12:42.348 "data_size": 63488 00:12:42.348 } 00:12:42.348 ] 00:12:42.348 }' 00:12:42.348 14:37:33 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:42.348 14:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:42.348 14:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:42.348 14:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:42.348 14:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:42.348 14:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:42.348 14:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:42.348 14:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:42.348 14:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:42.348 14:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:42.348 14:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.348 14:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.348 14:37:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.348 14:37:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.348 14:37:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.348 14:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:42.348 "name": "raid_bdev1", 00:12:42.348 "uuid": "6490a049-216c-4a26-b809-e639aa3489e1", 00:12:42.348 "strip_size_kb": 64, 00:12:42.348 "state": "online", 00:12:42.348 "raid_level": "raid5f", 00:12:42.348 "superblock": true, 
00:12:42.348 "num_base_bdevs": 3, 00:12:42.348 "num_base_bdevs_discovered": 3, 00:12:42.348 "num_base_bdevs_operational": 3, 00:12:42.348 "base_bdevs_list": [ 00:12:42.348 { 00:12:42.348 "name": "spare", 00:12:42.348 "uuid": "97fdbf46-3a7d-5aab-9bc1-13af302e7251", 00:12:42.348 "is_configured": true, 00:12:42.348 "data_offset": 2048, 00:12:42.348 "data_size": 63488 00:12:42.348 }, 00:12:42.348 { 00:12:42.348 "name": "BaseBdev2", 00:12:42.348 "uuid": "561f3bd4-a2f6-581c-ab2c-e2c201cb88e1", 00:12:42.348 "is_configured": true, 00:12:42.348 "data_offset": 2048, 00:12:42.348 "data_size": 63488 00:12:42.348 }, 00:12:42.348 { 00:12:42.348 "name": "BaseBdev3", 00:12:42.348 "uuid": "4f0bdffe-1053-5602-abb1-1576eb23c750", 00:12:42.348 "is_configured": true, 00:12:42.348 "data_offset": 2048, 00:12:42.348 "data_size": 63488 00:12:42.348 } 00:12:42.348 ] 00:12:42.348 }' 00:12:42.348 14:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:42.348 14:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:42.348 14:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:42.348 14:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:42.348 14:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:12:42.348 14:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:42.348 14:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:42.348 14:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:42.349 14:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:42.349 14:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:12:42.349 14:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.349 14:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.349 14:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.349 14:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.349 14:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.349 14:37:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.349 14:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.349 14:37:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.349 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.349 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.349 "name": "raid_bdev1", 00:12:42.349 "uuid": "6490a049-216c-4a26-b809-e639aa3489e1", 00:12:42.349 "strip_size_kb": 64, 00:12:42.349 "state": "online", 00:12:42.349 "raid_level": "raid5f", 00:12:42.349 "superblock": true, 00:12:42.349 "num_base_bdevs": 3, 00:12:42.349 "num_base_bdevs_discovered": 3, 00:12:42.349 "num_base_bdevs_operational": 3, 00:12:42.349 "base_bdevs_list": [ 00:12:42.349 { 00:12:42.349 "name": "spare", 00:12:42.349 "uuid": "97fdbf46-3a7d-5aab-9bc1-13af302e7251", 00:12:42.349 "is_configured": true, 00:12:42.349 "data_offset": 2048, 00:12:42.349 "data_size": 63488 00:12:42.349 }, 00:12:42.349 { 00:12:42.349 "name": "BaseBdev2", 00:12:42.349 "uuid": "561f3bd4-a2f6-581c-ab2c-e2c201cb88e1", 00:12:42.349 "is_configured": true, 00:12:42.349 "data_offset": 2048, 00:12:42.349 "data_size": 63488 00:12:42.349 }, 00:12:42.349 { 00:12:42.349 "name": 
"BaseBdev3", 00:12:42.349 "uuid": "4f0bdffe-1053-5602-abb1-1576eb23c750", 00:12:42.349 "is_configured": true, 00:12:42.349 "data_offset": 2048, 00:12:42.349 "data_size": 63488 00:12:42.349 } 00:12:42.349 ] 00:12:42.349 }' 00:12:42.349 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.349 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.922 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:42.922 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.922 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.922 [2024-10-01 14:37:34.338388] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:42.922 [2024-10-01 14:37:34.338432] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:42.922 [2024-10-01 14:37:34.338512] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:42.922 [2024-10-01 14:37:34.338593] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:42.922 [2024-10-01 14:37:34.338619] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:42.922 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.922 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.922 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:42.922 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.922 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.922 14:37:34 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.922 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:42.922 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:42.922 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:42.922 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:42.922 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:42.922 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:42.922 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:42.922 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:42.922 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:42.922 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:42.922 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:42.922 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:42.922 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:42.922 /dev/nbd0 00:12:43.182 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:43.182 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:43.182 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:43.182 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@869 -- # local i 00:12:43.182 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:43.182 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:43.182 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:43.182 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:43.183 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:43.183 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:43.183 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:43.183 1+0 records in 00:12:43.183 1+0 records out 00:12:43.183 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325854 s, 12.6 MB/s 00:12:43.183 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.183 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:43.183 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.183 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:43.183 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:43.183 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:43.183 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:43.183 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare 
/dev/nbd1 00:12:43.183 /dev/nbd1 00:12:43.183 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:43.442 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:43.442 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:43.442 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:43.442 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:43.442 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:43.442 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:43.442 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:43.442 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:43.442 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:43.442 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:43.442 1+0 records in 00:12:43.442 1+0 records out 00:12:43.442 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302996 s, 13.5 MB/s 00:12:43.442 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.442 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:43.442 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.442 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:43.442 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@889 -- # return 0 00:12:43.442 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:43.442 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:43.442 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:43.442 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:43.442 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:43.442 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:43.442 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:43.443 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:43.443 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:43.443 14:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:43.743 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:43.743 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:43.743 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:43.743 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:43.743 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:43.743 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:43.743 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:43.743 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@45 -- # return 0 00:12:43.743 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:43.743 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:44.033 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:44.033 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:44.033 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:44.033 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:44.033 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:44.033 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:44.033 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:44.033 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:44.033 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:44.033 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:44.033 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.034 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.034 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.034 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:44.034 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.034 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:44.034 [2024-10-01 14:37:35.493418] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:44.034 [2024-10-01 14:37:35.493491] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.034 [2024-10-01 14:37:35.493512] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:44.034 [2024-10-01 14:37:35.493526] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.034 [2024-10-01 14:37:35.495787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.034 [2024-10-01 14:37:35.495824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:44.034 [2024-10-01 14:37:35.495915] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:44.034 [2024-10-01 14:37:35.495968] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:44.034 [2024-10-01 14:37:35.496104] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:44.034 [2024-10-01 14:37:35.496205] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:44.034 spare 00:12:44.034 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.034 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:44.034 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.034 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.034 [2024-10-01 14:37:35.596305] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:44.034 [2024-10-01 14:37:35.596356] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:44.034 [2024-10-01 
14:37:35.596674] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:12:44.034 [2024-10-01 14:37:35.600243] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:44.034 [2024-10-01 14:37:35.600268] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:12:44.034 [2024-10-01 14:37:35.600464] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:44.034 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.034 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:12:44.034 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:44.034 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:44.034 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:44.034 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:44.034 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:44.034 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.034 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.034 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.034 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.034 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.034 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.034 14:37:35 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.034 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.034 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.034 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.034 "name": "raid_bdev1", 00:12:44.034 "uuid": "6490a049-216c-4a26-b809-e639aa3489e1", 00:12:44.034 "strip_size_kb": 64, 00:12:44.034 "state": "online", 00:12:44.034 "raid_level": "raid5f", 00:12:44.034 "superblock": true, 00:12:44.034 "num_base_bdevs": 3, 00:12:44.034 "num_base_bdevs_discovered": 3, 00:12:44.034 "num_base_bdevs_operational": 3, 00:12:44.034 "base_bdevs_list": [ 00:12:44.034 { 00:12:44.034 "name": "spare", 00:12:44.034 "uuid": "97fdbf46-3a7d-5aab-9bc1-13af302e7251", 00:12:44.034 "is_configured": true, 00:12:44.034 "data_offset": 2048, 00:12:44.034 "data_size": 63488 00:12:44.034 }, 00:12:44.034 { 00:12:44.034 "name": "BaseBdev2", 00:12:44.034 "uuid": "561f3bd4-a2f6-581c-ab2c-e2c201cb88e1", 00:12:44.034 "is_configured": true, 00:12:44.034 "data_offset": 2048, 00:12:44.034 "data_size": 63488 00:12:44.034 }, 00:12:44.034 { 00:12:44.034 "name": "BaseBdev3", 00:12:44.034 "uuid": "4f0bdffe-1053-5602-abb1-1576eb23c750", 00:12:44.034 "is_configured": true, 00:12:44.034 "data_offset": 2048, 00:12:44.034 "data_size": 63488 00:12:44.034 } 00:12:44.034 ] 00:12:44.034 }' 00:12:44.034 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.034 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.295 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:44.295 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:44.295 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:44.295 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:44.295 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:44.295 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.296 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.296 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.296 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.296 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.296 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:44.296 "name": "raid_bdev1", 00:12:44.296 "uuid": "6490a049-216c-4a26-b809-e639aa3489e1", 00:12:44.296 "strip_size_kb": 64, 00:12:44.296 "state": "online", 00:12:44.296 "raid_level": "raid5f", 00:12:44.296 "superblock": true, 00:12:44.296 "num_base_bdevs": 3, 00:12:44.296 "num_base_bdevs_discovered": 3, 00:12:44.296 "num_base_bdevs_operational": 3, 00:12:44.296 "base_bdevs_list": [ 00:12:44.296 { 00:12:44.296 "name": "spare", 00:12:44.296 "uuid": "97fdbf46-3a7d-5aab-9bc1-13af302e7251", 00:12:44.296 "is_configured": true, 00:12:44.296 "data_offset": 2048, 00:12:44.296 "data_size": 63488 00:12:44.296 }, 00:12:44.296 { 00:12:44.296 "name": "BaseBdev2", 00:12:44.296 "uuid": "561f3bd4-a2f6-581c-ab2c-e2c201cb88e1", 00:12:44.296 "is_configured": true, 00:12:44.296 "data_offset": 2048, 00:12:44.296 "data_size": 63488 00:12:44.296 }, 00:12:44.296 { 00:12:44.296 "name": "BaseBdev3", 00:12:44.296 "uuid": "4f0bdffe-1053-5602-abb1-1576eb23c750", 00:12:44.296 "is_configured": true, 00:12:44.296 "data_offset": 2048, 00:12:44.296 "data_size": 63488 00:12:44.296 } 
00:12:44.296 ] 00:12:44.296 }' 00:12:44.296 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:44.554 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:44.555 14:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:44.555 14:37:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:44.555 14:37:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.555 14:37:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:44.555 14:37:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.555 14:37:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.555 14:37:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.555 14:37:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:44.555 14:37:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:44.555 14:37:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.555 14:37:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.555 [2024-10-01 14:37:36.048661] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:44.555 14:37:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.555 14:37:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:12:44.555 14:37:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:44.555 14:37:36 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:44.555 14:37:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:44.555 14:37:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:44.555 14:37:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:44.555 14:37:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.555 14:37:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.555 14:37:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.555 14:37:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.555 14:37:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.555 14:37:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.555 14:37:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.555 14:37:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.555 14:37:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.555 14:37:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.555 "name": "raid_bdev1", 00:12:44.555 "uuid": "6490a049-216c-4a26-b809-e639aa3489e1", 00:12:44.555 "strip_size_kb": 64, 00:12:44.555 "state": "online", 00:12:44.555 "raid_level": "raid5f", 00:12:44.555 "superblock": true, 00:12:44.555 "num_base_bdevs": 3, 00:12:44.555 "num_base_bdevs_discovered": 2, 00:12:44.555 "num_base_bdevs_operational": 2, 00:12:44.555 "base_bdevs_list": [ 00:12:44.555 { 00:12:44.555 "name": null, 00:12:44.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.555 "is_configured": 
false, 00:12:44.555 "data_offset": 0, 00:12:44.555 "data_size": 63488 00:12:44.555 }, 00:12:44.555 { 00:12:44.555 "name": "BaseBdev2", 00:12:44.555 "uuid": "561f3bd4-a2f6-581c-ab2c-e2c201cb88e1", 00:12:44.555 "is_configured": true, 00:12:44.555 "data_offset": 2048, 00:12:44.555 "data_size": 63488 00:12:44.555 }, 00:12:44.555 { 00:12:44.555 "name": "BaseBdev3", 00:12:44.555 "uuid": "4f0bdffe-1053-5602-abb1-1576eb23c750", 00:12:44.555 "is_configured": true, 00:12:44.555 "data_offset": 2048, 00:12:44.555 "data_size": 63488 00:12:44.555 } 00:12:44.555 ] 00:12:44.555 }' 00:12:44.555 14:37:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.555 14:37:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.813 14:37:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:44.813 14:37:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.813 14:37:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.813 [2024-10-01 14:37:36.372771] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:44.813 [2024-10-01 14:37:36.372946] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:44.813 [2024-10-01 14:37:36.372963] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:44.813 [2024-10-01 14:37:36.372999] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:44.813 [2024-10-01 14:37:36.382332] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:12:44.813 14:37:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.813 14:37:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:44.813 [2024-10-01 14:37:36.387582] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:45.751 14:37:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:45.751 14:37:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:45.751 14:37:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:45.751 14:37:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:45.751 14:37:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:45.751 14:37:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.751 14:37:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.751 14:37:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.751 14:37:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.751 14:37:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.751 14:37:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:45.751 "name": "raid_bdev1", 00:12:45.751 "uuid": "6490a049-216c-4a26-b809-e639aa3489e1", 00:12:45.751 "strip_size_kb": 64, 00:12:45.751 "state": "online", 00:12:45.751 
"raid_level": "raid5f", 00:12:45.751 "superblock": true, 00:12:45.751 "num_base_bdevs": 3, 00:12:45.751 "num_base_bdevs_discovered": 3, 00:12:45.751 "num_base_bdevs_operational": 3, 00:12:45.751 "process": { 00:12:45.751 "type": "rebuild", 00:12:45.751 "target": "spare", 00:12:45.751 "progress": { 00:12:45.751 "blocks": 18432, 00:12:45.751 "percent": 14 00:12:45.751 } 00:12:45.751 }, 00:12:45.751 "base_bdevs_list": [ 00:12:45.751 { 00:12:45.751 "name": "spare", 00:12:45.751 "uuid": "97fdbf46-3a7d-5aab-9bc1-13af302e7251", 00:12:45.751 "is_configured": true, 00:12:45.751 "data_offset": 2048, 00:12:45.751 "data_size": 63488 00:12:45.751 }, 00:12:45.751 { 00:12:45.751 "name": "BaseBdev2", 00:12:45.751 "uuid": "561f3bd4-a2f6-581c-ab2c-e2c201cb88e1", 00:12:45.751 "is_configured": true, 00:12:45.751 "data_offset": 2048, 00:12:45.751 "data_size": 63488 00:12:45.751 }, 00:12:45.751 { 00:12:45.751 "name": "BaseBdev3", 00:12:45.751 "uuid": "4f0bdffe-1053-5602-abb1-1576eb23c750", 00:12:45.751 "is_configured": true, 00:12:45.751 "data_offset": 2048, 00:12:45.751 "data_size": 63488 00:12:45.751 } 00:12:45.751 ] 00:12:45.751 }' 00:12:45.751 14:37:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:46.013 14:37:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:46.013 14:37:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:46.013 14:37:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:46.013 14:37:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:46.013 14:37:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.013 14:37:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.013 [2024-10-01 14:37:37.485000] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:46.013 [2024-10-01 14:37:37.498148] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:46.013 [2024-10-01 14:37:37.498220] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:46.013 [2024-10-01 14:37:37.498235] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:46.013 [2024-10-01 14:37:37.498244] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:46.013 14:37:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.013 14:37:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:12:46.013 14:37:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:46.013 14:37:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:46.013 14:37:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:46.013 14:37:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:46.013 14:37:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:46.013 14:37:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.014 14:37:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.014 14:37:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.014 14:37:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.014 14:37:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.014 14:37:37 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.014 14:37:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.014 14:37:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.014 14:37:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.014 14:37:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.014 "name": "raid_bdev1", 00:12:46.014 "uuid": "6490a049-216c-4a26-b809-e639aa3489e1", 00:12:46.014 "strip_size_kb": 64, 00:12:46.014 "state": "online", 00:12:46.014 "raid_level": "raid5f", 00:12:46.014 "superblock": true, 00:12:46.014 "num_base_bdevs": 3, 00:12:46.014 "num_base_bdevs_discovered": 2, 00:12:46.014 "num_base_bdevs_operational": 2, 00:12:46.014 "base_bdevs_list": [ 00:12:46.014 { 00:12:46.014 "name": null, 00:12:46.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.014 "is_configured": false, 00:12:46.014 "data_offset": 0, 00:12:46.014 "data_size": 63488 00:12:46.014 }, 00:12:46.014 { 00:12:46.014 "name": "BaseBdev2", 00:12:46.014 "uuid": "561f3bd4-a2f6-581c-ab2c-e2c201cb88e1", 00:12:46.014 "is_configured": true, 00:12:46.014 "data_offset": 2048, 00:12:46.014 "data_size": 63488 00:12:46.014 }, 00:12:46.014 { 00:12:46.014 "name": "BaseBdev3", 00:12:46.014 "uuid": "4f0bdffe-1053-5602-abb1-1576eb23c750", 00:12:46.014 "is_configured": true, 00:12:46.014 "data_offset": 2048, 00:12:46.014 "data_size": 63488 00:12:46.014 } 00:12:46.014 ] 00:12:46.014 }' 00:12:46.014 14:37:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.014 14:37:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.275 14:37:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:46.275 14:37:37 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.275 14:37:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.275 [2024-10-01 14:37:37.828366] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:46.275 [2024-10-01 14:37:37.828442] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:46.275 [2024-10-01 14:37:37.828463] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:12:46.275 [2024-10-01 14:37:37.828478] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:46.275 [2024-10-01 14:37:37.828938] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:46.275 [2024-10-01 14:37:37.828964] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:46.275 [2024-10-01 14:37:37.829049] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:46.275 [2024-10-01 14:37:37.829064] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:46.275 [2024-10-01 14:37:37.829075] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:46.275 [2024-10-01 14:37:37.829097] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:46.275 [2024-10-01 14:37:37.838246] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:12:46.275 spare 00:12:46.275 14:37:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.275 14:37:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:46.275 [2024-10-01 14:37:37.843517] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:47.219 14:37:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:47.219 14:37:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:47.219 14:37:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:47.219 14:37:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:47.219 14:37:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:47.219 14:37:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.219 14:37:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.219 14:37:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.219 14:37:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.219 14:37:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.219 14:37:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:47.219 "name": "raid_bdev1", 00:12:47.219 "uuid": "6490a049-216c-4a26-b809-e639aa3489e1", 00:12:47.219 "strip_size_kb": 64, 00:12:47.219 "state": 
"online", 00:12:47.219 "raid_level": "raid5f", 00:12:47.219 "superblock": true, 00:12:47.219 "num_base_bdevs": 3, 00:12:47.219 "num_base_bdevs_discovered": 3, 00:12:47.219 "num_base_bdevs_operational": 3, 00:12:47.219 "process": { 00:12:47.219 "type": "rebuild", 00:12:47.219 "target": "spare", 00:12:47.219 "progress": { 00:12:47.219 "blocks": 18432, 00:12:47.219 "percent": 14 00:12:47.219 } 00:12:47.219 }, 00:12:47.219 "base_bdevs_list": [ 00:12:47.219 { 00:12:47.219 "name": "spare", 00:12:47.219 "uuid": "97fdbf46-3a7d-5aab-9bc1-13af302e7251", 00:12:47.219 "is_configured": true, 00:12:47.219 "data_offset": 2048, 00:12:47.219 "data_size": 63488 00:12:47.219 }, 00:12:47.219 { 00:12:47.219 "name": "BaseBdev2", 00:12:47.219 "uuid": "561f3bd4-a2f6-581c-ab2c-e2c201cb88e1", 00:12:47.219 "is_configured": true, 00:12:47.219 "data_offset": 2048, 00:12:47.219 "data_size": 63488 00:12:47.219 }, 00:12:47.219 { 00:12:47.219 "name": "BaseBdev3", 00:12:47.219 "uuid": "4f0bdffe-1053-5602-abb1-1576eb23c750", 00:12:47.219 "is_configured": true, 00:12:47.219 "data_offset": 2048, 00:12:47.219 "data_size": 63488 00:12:47.219 } 00:12:47.219 ] 00:12:47.219 }' 00:12:47.219 14:37:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:47.479 14:37:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:47.479 14:37:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:47.479 14:37:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:47.479 14:37:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:47.479 14:37:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.479 14:37:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.479 [2024-10-01 14:37:38.952951] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:47.479 [2024-10-01 14:37:38.954138] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:47.479 [2024-10-01 14:37:38.954189] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:47.479 [2024-10-01 14:37:38.954206] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:47.479 [2024-10-01 14:37:38.954214] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:47.479 14:37:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.479 14:37:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:12:47.479 14:37:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:47.479 14:37:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:47.479 14:37:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:47.479 14:37:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:47.479 14:37:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:47.479 14:37:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.479 14:37:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.479 14:37:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.479 14:37:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.479 14:37:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.479 14:37:38 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.479 14:37:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.479 14:37:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.479 14:37:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.479 14:37:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.479 "name": "raid_bdev1", 00:12:47.479 "uuid": "6490a049-216c-4a26-b809-e639aa3489e1", 00:12:47.479 "strip_size_kb": 64, 00:12:47.479 "state": "online", 00:12:47.479 "raid_level": "raid5f", 00:12:47.479 "superblock": true, 00:12:47.479 "num_base_bdevs": 3, 00:12:47.479 "num_base_bdevs_discovered": 2, 00:12:47.479 "num_base_bdevs_operational": 2, 00:12:47.479 "base_bdevs_list": [ 00:12:47.479 { 00:12:47.479 "name": null, 00:12:47.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.479 "is_configured": false, 00:12:47.479 "data_offset": 0, 00:12:47.479 "data_size": 63488 00:12:47.479 }, 00:12:47.479 { 00:12:47.479 "name": "BaseBdev2", 00:12:47.479 "uuid": "561f3bd4-a2f6-581c-ab2c-e2c201cb88e1", 00:12:47.479 "is_configured": true, 00:12:47.479 "data_offset": 2048, 00:12:47.479 "data_size": 63488 00:12:47.479 }, 00:12:47.479 { 00:12:47.479 "name": "BaseBdev3", 00:12:47.479 "uuid": "4f0bdffe-1053-5602-abb1-1576eb23c750", 00:12:47.479 "is_configured": true, 00:12:47.479 "data_offset": 2048, 00:12:47.479 "data_size": 63488 00:12:47.479 } 00:12:47.479 ] 00:12:47.479 }' 00:12:47.479 14:37:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.479 14:37:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.757 14:37:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:47.757 14:37:39 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:47.757 14:37:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:47.757 14:37:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:47.757 14:37:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:47.757 14:37:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.757 14:37:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.757 14:37:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.757 14:37:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.757 14:37:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.757 14:37:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:47.757 "name": "raid_bdev1", 00:12:47.757 "uuid": "6490a049-216c-4a26-b809-e639aa3489e1", 00:12:47.757 "strip_size_kb": 64, 00:12:47.757 "state": "online", 00:12:47.757 "raid_level": "raid5f", 00:12:47.757 "superblock": true, 00:12:47.757 "num_base_bdevs": 3, 00:12:47.757 "num_base_bdevs_discovered": 2, 00:12:47.757 "num_base_bdevs_operational": 2, 00:12:47.757 "base_bdevs_list": [ 00:12:47.757 { 00:12:47.757 "name": null, 00:12:47.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.757 "is_configured": false, 00:12:47.757 "data_offset": 0, 00:12:47.757 "data_size": 63488 00:12:47.757 }, 00:12:47.757 { 00:12:47.757 "name": "BaseBdev2", 00:12:47.757 "uuid": "561f3bd4-a2f6-581c-ab2c-e2c201cb88e1", 00:12:47.757 "is_configured": true, 00:12:47.757 "data_offset": 2048, 00:12:47.757 "data_size": 63488 00:12:47.757 }, 00:12:47.757 { 00:12:47.757 "name": "BaseBdev3", 00:12:47.757 "uuid": "4f0bdffe-1053-5602-abb1-1576eb23c750", 00:12:47.757 
"is_configured": true, 00:12:47.757 "data_offset": 2048, 00:12:47.757 "data_size": 63488 00:12:47.757 } 00:12:47.757 ] 00:12:47.757 }' 00:12:47.757 14:37:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:47.757 14:37:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:47.757 14:37:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:47.757 14:37:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:47.757 14:37:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:47.757 14:37:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.757 14:37:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.757 14:37:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.757 14:37:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:47.757 14:37:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.757 14:37:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.757 [2024-10-01 14:37:39.400360] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:47.757 [2024-10-01 14:37:39.400424] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.757 [2024-10-01 14:37:39.400446] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:47.757 [2024-10-01 14:37:39.400456] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.757 [2024-10-01 14:37:39.400913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.757 
[2024-10-01 14:37:39.400930] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:47.757 [2024-10-01 14:37:39.401003] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:47.757 [2024-10-01 14:37:39.401015] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:47.757 [2024-10-01 14:37:39.401030] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:47.757 [2024-10-01 14:37:39.401040] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:47.757 BaseBdev1 00:12:47.757 14:37:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.757 14:37:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:49.136 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:12:49.136 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:49.136 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:49.136 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:49.137 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:49.137 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:49.137 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.137 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.137 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.137 14:37:40 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.137 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.137 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.137 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.137 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.137 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.137 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.137 "name": "raid_bdev1", 00:12:49.137 "uuid": "6490a049-216c-4a26-b809-e639aa3489e1", 00:12:49.137 "strip_size_kb": 64, 00:12:49.137 "state": "online", 00:12:49.137 "raid_level": "raid5f", 00:12:49.137 "superblock": true, 00:12:49.137 "num_base_bdevs": 3, 00:12:49.137 "num_base_bdevs_discovered": 2, 00:12:49.137 "num_base_bdevs_operational": 2, 00:12:49.137 "base_bdevs_list": [ 00:12:49.137 { 00:12:49.137 "name": null, 00:12:49.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.137 "is_configured": false, 00:12:49.137 "data_offset": 0, 00:12:49.137 "data_size": 63488 00:12:49.137 }, 00:12:49.137 { 00:12:49.137 "name": "BaseBdev2", 00:12:49.137 "uuid": "561f3bd4-a2f6-581c-ab2c-e2c201cb88e1", 00:12:49.137 "is_configured": true, 00:12:49.137 "data_offset": 2048, 00:12:49.137 "data_size": 63488 00:12:49.137 }, 00:12:49.137 { 00:12:49.137 "name": "BaseBdev3", 00:12:49.137 "uuid": "4f0bdffe-1053-5602-abb1-1576eb23c750", 00:12:49.137 "is_configured": true, 00:12:49.137 "data_offset": 2048, 00:12:49.137 "data_size": 63488 00:12:49.137 } 00:12:49.137 ] 00:12:49.137 }' 00:12:49.137 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.137 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:49.137 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:49.137 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:49.137 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:49.137 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:49.137 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:49.137 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.137 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.137 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.137 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.137 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.137 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:49.137 "name": "raid_bdev1", 00:12:49.137 "uuid": "6490a049-216c-4a26-b809-e639aa3489e1", 00:12:49.137 "strip_size_kb": 64, 00:12:49.137 "state": "online", 00:12:49.137 "raid_level": "raid5f", 00:12:49.137 "superblock": true, 00:12:49.137 "num_base_bdevs": 3, 00:12:49.137 "num_base_bdevs_discovered": 2, 00:12:49.137 "num_base_bdevs_operational": 2, 00:12:49.137 "base_bdevs_list": [ 00:12:49.137 { 00:12:49.137 "name": null, 00:12:49.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.137 "is_configured": false, 00:12:49.137 "data_offset": 0, 00:12:49.137 "data_size": 63488 00:12:49.137 }, 00:12:49.137 { 00:12:49.137 "name": "BaseBdev2", 00:12:49.137 "uuid": "561f3bd4-a2f6-581c-ab2c-e2c201cb88e1", 
00:12:49.137 "is_configured": true, 00:12:49.137 "data_offset": 2048, 00:12:49.137 "data_size": 63488 00:12:49.137 }, 00:12:49.137 { 00:12:49.137 "name": "BaseBdev3", 00:12:49.137 "uuid": "4f0bdffe-1053-5602-abb1-1576eb23c750", 00:12:49.137 "is_configured": true, 00:12:49.137 "data_offset": 2048, 00:12:49.137 "data_size": 63488 00:12:49.137 } 00:12:49.137 ] 00:12:49.137 }' 00:12:49.137 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:49.137 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:49.137 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:49.137 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:49.137 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:49.137 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:12:49.137 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:49.137 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:49.137 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:49.137 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:49.137 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:49.137 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:49.137 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.137 14:37:40 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.137 [2024-10-01 14:37:40.792777] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:49.137 [2024-10-01 14:37:40.792934] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:49.137 [2024-10-01 14:37:40.792949] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:49.137 request: 00:12:49.137 { 00:12:49.137 "base_bdev": "BaseBdev1", 00:12:49.137 "raid_bdev": "raid_bdev1", 00:12:49.137 "method": "bdev_raid_add_base_bdev", 00:12:49.137 "req_id": 1 00:12:49.137 } 00:12:49.137 Got JSON-RPC error response 00:12:49.137 response: 00:12:49.137 { 00:12:49.137 "code": -22, 00:12:49.137 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:49.137 } 00:12:49.137 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:49.137 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:12:49.137 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:49.137 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:49.137 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:49.137 14:37:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:50.119 14:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:12:50.119 14:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:50.381 14:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.381 14:37:41 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:50.381 14:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:50.381 14:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:50.381 14:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.381 14:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.381 14:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.381 14:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.381 14:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.381 14:37:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.381 14:37:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.381 14:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.381 14:37:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.381 14:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.381 "name": "raid_bdev1", 00:12:50.381 "uuid": "6490a049-216c-4a26-b809-e639aa3489e1", 00:12:50.381 "strip_size_kb": 64, 00:12:50.381 "state": "online", 00:12:50.381 "raid_level": "raid5f", 00:12:50.381 "superblock": true, 00:12:50.381 "num_base_bdevs": 3, 00:12:50.381 "num_base_bdevs_discovered": 2, 00:12:50.381 "num_base_bdevs_operational": 2, 00:12:50.381 "base_bdevs_list": [ 00:12:50.381 { 00:12:50.381 "name": null, 00:12:50.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.381 "is_configured": false, 00:12:50.381 "data_offset": 0, 00:12:50.381 "data_size": 63488 00:12:50.381 }, 00:12:50.381 { 00:12:50.381 
"name": "BaseBdev2", 00:12:50.381 "uuid": "561f3bd4-a2f6-581c-ab2c-e2c201cb88e1", 00:12:50.381 "is_configured": true, 00:12:50.381 "data_offset": 2048, 00:12:50.381 "data_size": 63488 00:12:50.381 }, 00:12:50.381 { 00:12:50.381 "name": "BaseBdev3", 00:12:50.381 "uuid": "4f0bdffe-1053-5602-abb1-1576eb23c750", 00:12:50.381 "is_configured": true, 00:12:50.381 "data_offset": 2048, 00:12:50.381 "data_size": 63488 00:12:50.381 } 00:12:50.381 ] 00:12:50.381 }' 00:12:50.381 14:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.381 14:37:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.643 14:37:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:50.643 14:37:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.643 14:37:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:50.643 14:37:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:50.643 14:37:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.643 14:37:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.643 14:37:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.643 14:37:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.643 14:37:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.643 14:37:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.643 14:37:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:50.643 "name": "raid_bdev1", 00:12:50.643 "uuid": "6490a049-216c-4a26-b809-e639aa3489e1", 00:12:50.643 
"strip_size_kb": 64, 00:12:50.643 "state": "online", 00:12:50.643 "raid_level": "raid5f", 00:12:50.643 "superblock": true, 00:12:50.643 "num_base_bdevs": 3, 00:12:50.643 "num_base_bdevs_discovered": 2, 00:12:50.643 "num_base_bdevs_operational": 2, 00:12:50.643 "base_bdevs_list": [ 00:12:50.643 { 00:12:50.643 "name": null, 00:12:50.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.643 "is_configured": false, 00:12:50.643 "data_offset": 0, 00:12:50.643 "data_size": 63488 00:12:50.643 }, 00:12:50.643 { 00:12:50.643 "name": "BaseBdev2", 00:12:50.643 "uuid": "561f3bd4-a2f6-581c-ab2c-e2c201cb88e1", 00:12:50.643 "is_configured": true, 00:12:50.643 "data_offset": 2048, 00:12:50.643 "data_size": 63488 00:12:50.643 }, 00:12:50.643 { 00:12:50.643 "name": "BaseBdev3", 00:12:50.643 "uuid": "4f0bdffe-1053-5602-abb1-1576eb23c750", 00:12:50.643 "is_configured": true, 00:12:50.643 "data_offset": 2048, 00:12:50.643 "data_size": 63488 00:12:50.643 } 00:12:50.643 ] 00:12:50.643 }' 00:12:50.643 14:37:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.643 14:37:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:50.643 14:37:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.643 14:37:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:50.643 14:37:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 79985 00:12:50.643 14:37:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 79985 ']' 00:12:50.643 14:37:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 79985 00:12:50.643 14:37:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:12:50.643 14:37:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:50.643 14:37:42 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79985 00:12:50.643 14:37:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:50.643 killing process with pid 79985 00:12:50.643 14:37:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:50.643 14:37:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79985' 00:12:50.643 Received shutdown signal, test time was about 60.000000 seconds 00:12:50.643 00:12:50.643 Latency(us) 00:12:50.643 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:50.643 =================================================================================================================== 00:12:50.643 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:50.643 14:37:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 79985 00:12:50.644 [2024-10-01 14:37:42.244808] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:50.644 14:37:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 79985 00:12:50.644 [2024-10-01 14:37:42.244927] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:50.644 [2024-10-01 14:37:42.244994] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:50.644 [2024-10-01 14:37:42.245006] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:12:50.903 [2024-10-01 14:37:42.512617] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:51.882 14:37:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:12:51.882 00:12:51.882 real 0m20.747s 00:12:51.882 user 0m25.770s 00:12:51.882 sys 0m2.087s 00:12:51.882 14:37:43 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:12:51.882 14:37:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.882 ************************************ 00:12:51.882 END TEST raid5f_rebuild_test_sb 00:12:51.882 ************************************ 00:12:51.882 14:37:43 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:12:51.882 14:37:43 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:12:51.882 14:37:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:51.882 14:37:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:51.882 14:37:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:51.882 ************************************ 00:12:51.882 START TEST raid5f_state_function_test 00:12:51.882 ************************************ 00:12:51.882 14:37:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 false 00:12:51.882 14:37:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:12:51.882 14:37:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:51.882 14:37:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:51.882 14:37:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:51.882 14:37:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:51.882 14:37:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:51.882 14:37:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:51.882 14:37:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:51.882 14:37:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # 
(( i <= num_base_bdevs )) 00:12:51.882 14:37:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:51.882 14:37:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:51.883 14:37:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:51.883 14:37:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:51.883 14:37:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:51.883 14:37:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:51.883 14:37:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:51.883 14:37:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:51.883 14:37:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:51.883 14:37:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:51.883 14:37:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:51.883 14:37:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:51.883 14:37:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:51.883 14:37:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:51.883 14:37:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:51.883 14:37:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:12:51.883 14:37:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:51.883 14:37:43 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:51.883 14:37:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:51.883 Process raid pid: 80709 00:12:51.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.883 14:37:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:51.883 14:37:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80709 00:12:51.883 14:37:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80709' 00:12:51.883 14:37:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80709 00:12:51.883 14:37:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 80709 ']' 00:12:51.883 14:37:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.883 14:37:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:51.883 14:37:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:51.883 14:37:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.883 14:37:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:51.883 14:37:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.883 [2024-10-01 14:37:43.545719] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:12:51.883 [2024-10-01 14:37:43.545874] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.142 [2024-10-01 14:37:43.696846] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.402 [2024-10-01 14:37:43.891614] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.402 [2024-10-01 14:37:44.031171] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:52.402 [2024-10-01 14:37:44.031212] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:52.739 14:37:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:52.739 14:37:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:12:52.739 14:37:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:52.739 14:37:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.739 14:37:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.999 [2024-10-01 14:37:44.426276] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:53.000 [2024-10-01 14:37:44.426529] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:53.000 [2024-10-01 14:37:44.426548] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:53.000 [2024-10-01 14:37:44.426558] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:53.000 [2024-10-01 14:37:44.426565] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:12:53.000 [2024-10-01 14:37:44.426574] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:53.000 [2024-10-01 14:37:44.426581] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:53.000 [2024-10-01 14:37:44.426593] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:53.000 14:37:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.000 14:37:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:12:53.000 14:37:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:53.000 14:37:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:53.000 14:37:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:53.000 14:37:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:53.000 14:37:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:53.000 14:37:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.000 14:37:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.000 14:37:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.000 14:37:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.000 14:37:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.000 14:37:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:53.000 14:37:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.000 14:37:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.000 14:37:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.000 14:37:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.000 "name": "Existed_Raid", 00:12:53.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.000 "strip_size_kb": 64, 00:12:53.000 "state": "configuring", 00:12:53.000 "raid_level": "raid5f", 00:12:53.000 "superblock": false, 00:12:53.000 "num_base_bdevs": 4, 00:12:53.000 "num_base_bdevs_discovered": 0, 00:12:53.000 "num_base_bdevs_operational": 4, 00:12:53.000 "base_bdevs_list": [ 00:12:53.000 { 00:12:53.000 "name": "BaseBdev1", 00:12:53.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.000 "is_configured": false, 00:12:53.000 "data_offset": 0, 00:12:53.000 "data_size": 0 00:12:53.000 }, 00:12:53.000 { 00:12:53.000 "name": "BaseBdev2", 00:12:53.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.000 "is_configured": false, 00:12:53.000 "data_offset": 0, 00:12:53.000 "data_size": 0 00:12:53.000 }, 00:12:53.000 { 00:12:53.000 "name": "BaseBdev3", 00:12:53.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.000 "is_configured": false, 00:12:53.000 "data_offset": 0, 00:12:53.000 "data_size": 0 00:12:53.000 }, 00:12:53.000 { 00:12:53.000 "name": "BaseBdev4", 00:12:53.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.000 "is_configured": false, 00:12:53.000 "data_offset": 0, 00:12:53.000 "data_size": 0 00:12:53.000 } 00:12:53.000 ] 00:12:53.000 }' 00:12:53.000 14:37:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.000 14:37:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.259 14:37:44 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:53.259 14:37:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.259 14:37:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.259 [2024-10-01 14:37:44.766270] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:53.259 [2024-10-01 14:37:44.766324] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:53.259 14:37:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.259 14:37:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:53.259 14:37:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.259 14:37:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.259 [2024-10-01 14:37:44.774287] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:53.259 [2024-10-01 14:37:44.774486] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:53.259 [2024-10-01 14:37:44.774547] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:53.259 [2024-10-01 14:37:44.774577] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:53.259 [2024-10-01 14:37:44.774596] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:53.259 [2024-10-01 14:37:44.774617] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:53.259 [2024-10-01 14:37:44.774635] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:12:53.259 [2024-10-01 14:37:44.774656] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:53.259 14:37:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.259 14:37:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:53.259 14:37:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.259 14:37:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.259 [2024-10-01 14:37:44.824562] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:53.259 BaseBdev1 00:12:53.259 14:37:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.259 14:37:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:53.259 14:37:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:53.259 14:37:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:53.259 14:37:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:53.259 14:37:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:53.259 14:37:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:53.259 14:37:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:53.259 14:37:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.259 14:37:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.259 14:37:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.259 
14:37:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:53.259 14:37:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.259 14:37:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.259 [ 00:12:53.259 { 00:12:53.259 "name": "BaseBdev1", 00:12:53.259 "aliases": [ 00:12:53.259 "0bcf704c-040c-4580-96f2-b7080855da9e" 00:12:53.259 ], 00:12:53.259 "product_name": "Malloc disk", 00:12:53.259 "block_size": 512, 00:12:53.259 "num_blocks": 65536, 00:12:53.259 "uuid": "0bcf704c-040c-4580-96f2-b7080855da9e", 00:12:53.259 "assigned_rate_limits": { 00:12:53.259 "rw_ios_per_sec": 0, 00:12:53.259 "rw_mbytes_per_sec": 0, 00:12:53.259 "r_mbytes_per_sec": 0, 00:12:53.259 "w_mbytes_per_sec": 0 00:12:53.259 }, 00:12:53.259 "claimed": true, 00:12:53.259 "claim_type": "exclusive_write", 00:12:53.259 "zoned": false, 00:12:53.259 "supported_io_types": { 00:12:53.259 "read": true, 00:12:53.259 "write": true, 00:12:53.259 "unmap": true, 00:12:53.259 "flush": true, 00:12:53.259 "reset": true, 00:12:53.259 "nvme_admin": false, 00:12:53.259 "nvme_io": false, 00:12:53.259 "nvme_io_md": false, 00:12:53.259 "write_zeroes": true, 00:12:53.259 "zcopy": true, 00:12:53.259 "get_zone_info": false, 00:12:53.259 "zone_management": false, 00:12:53.259 "zone_append": false, 00:12:53.259 "compare": false, 00:12:53.259 "compare_and_write": false, 00:12:53.259 "abort": true, 00:12:53.259 "seek_hole": false, 00:12:53.259 "seek_data": false, 00:12:53.259 "copy": true, 00:12:53.259 "nvme_iov_md": false 00:12:53.259 }, 00:12:53.259 "memory_domains": [ 00:12:53.259 { 00:12:53.259 "dma_device_id": "system", 00:12:53.259 "dma_device_type": 1 00:12:53.259 }, 00:12:53.259 { 00:12:53.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.259 "dma_device_type": 2 00:12:53.259 } 00:12:53.259 ], 00:12:53.259 "driver_specific": {} 00:12:53.259 } 
00:12:53.259 ] 00:12:53.259 14:37:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.259 14:37:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:53.259 14:37:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:12:53.259 14:37:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:53.259 14:37:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:53.259 14:37:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:53.259 14:37:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:53.260 14:37:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:53.260 14:37:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.260 14:37:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.260 14:37:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.260 14:37:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.260 14:37:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.260 14:37:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.260 14:37:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.260 14:37:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:53.260 14:37:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:53.260 14:37:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.260 "name": "Existed_Raid", 00:12:53.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.260 "strip_size_kb": 64, 00:12:53.260 "state": "configuring", 00:12:53.260 "raid_level": "raid5f", 00:12:53.260 "superblock": false, 00:12:53.260 "num_base_bdevs": 4, 00:12:53.260 "num_base_bdevs_discovered": 1, 00:12:53.260 "num_base_bdevs_operational": 4, 00:12:53.260 "base_bdevs_list": [ 00:12:53.260 { 00:12:53.260 "name": "BaseBdev1", 00:12:53.260 "uuid": "0bcf704c-040c-4580-96f2-b7080855da9e", 00:12:53.260 "is_configured": true, 00:12:53.260 "data_offset": 0, 00:12:53.260 "data_size": 65536 00:12:53.260 }, 00:12:53.260 { 00:12:53.260 "name": "BaseBdev2", 00:12:53.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.260 "is_configured": false, 00:12:53.260 "data_offset": 0, 00:12:53.260 "data_size": 0 00:12:53.260 }, 00:12:53.260 { 00:12:53.260 "name": "BaseBdev3", 00:12:53.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.260 "is_configured": false, 00:12:53.260 "data_offset": 0, 00:12:53.260 "data_size": 0 00:12:53.260 }, 00:12:53.260 { 00:12:53.260 "name": "BaseBdev4", 00:12:53.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.260 "is_configured": false, 00:12:53.260 "data_offset": 0, 00:12:53.260 "data_size": 0 00:12:53.260 } 00:12:53.260 ] 00:12:53.260 }' 00:12:53.260 14:37:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.260 14:37:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.519 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:53.519 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.519 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.519 
[2024-10-01 14:37:45.188735] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:53.519 [2024-10-01 14:37:45.188946] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:53.519 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.519 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:53.519 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.519 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.519 [2024-10-01 14:37:45.196774] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:53.519 [2024-10-01 14:37:45.198856] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:53.519 [2024-10-01 14:37:45.198983] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:53.519 [2024-10-01 14:37:45.199042] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:53.519 [2024-10-01 14:37:45.199074] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:53.519 [2024-10-01 14:37:45.199096] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:53.519 [2024-10-01 14:37:45.199119] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:53.519 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.519 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:53.519 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:12:53.519 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:12:53.519 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:53.519 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:53.519 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:53.779 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:53.779 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:53.779 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.779 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.779 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.779 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.779 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:53.779 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.779 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.779 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.779 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.779 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.779 "name": "Existed_Raid", 00:12:53.779 "uuid": "00000000-0000-0000-0000-000000000000", 
00:12:53.779 "strip_size_kb": 64, 00:12:53.779 "state": "configuring", 00:12:53.779 "raid_level": "raid5f", 00:12:53.779 "superblock": false, 00:12:53.779 "num_base_bdevs": 4, 00:12:53.779 "num_base_bdevs_discovered": 1, 00:12:53.779 "num_base_bdevs_operational": 4, 00:12:53.779 "base_bdevs_list": [ 00:12:53.779 { 00:12:53.779 "name": "BaseBdev1", 00:12:53.779 "uuid": "0bcf704c-040c-4580-96f2-b7080855da9e", 00:12:53.779 "is_configured": true, 00:12:53.779 "data_offset": 0, 00:12:53.779 "data_size": 65536 00:12:53.779 }, 00:12:53.779 { 00:12:53.779 "name": "BaseBdev2", 00:12:53.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.779 "is_configured": false, 00:12:53.779 "data_offset": 0, 00:12:53.779 "data_size": 0 00:12:53.779 }, 00:12:53.779 { 00:12:53.779 "name": "BaseBdev3", 00:12:53.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.779 "is_configured": false, 00:12:53.779 "data_offset": 0, 00:12:53.779 "data_size": 0 00:12:53.779 }, 00:12:53.779 { 00:12:53.779 "name": "BaseBdev4", 00:12:53.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.779 "is_configured": false, 00:12:53.779 "data_offset": 0, 00:12:53.779 "data_size": 0 00:12:53.779 } 00:12:53.779 ] 00:12:53.779 }' 00:12:53.779 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.779 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.039 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:54.039 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.039 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.039 BaseBdev2 00:12:54.039 [2024-10-01 14:37:45.535841] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:54.039 14:37:45 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.039 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:54.039 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:54.039 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:54.039 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:54.039 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:54.039 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:54.039 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:54.039 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.039 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.039 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.039 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:54.039 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.039 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.039 [ 00:12:54.039 { 00:12:54.039 "name": "BaseBdev2", 00:12:54.039 "aliases": [ 00:12:54.039 "5b6c820f-feb0-4c8d-b884-10209bf603f5" 00:12:54.039 ], 00:12:54.039 "product_name": "Malloc disk", 00:12:54.039 "block_size": 512, 00:12:54.039 "num_blocks": 65536, 00:12:54.040 "uuid": "5b6c820f-feb0-4c8d-b884-10209bf603f5", 00:12:54.040 "assigned_rate_limits": { 00:12:54.040 "rw_ios_per_sec": 0, 00:12:54.040 "rw_mbytes_per_sec": 0, 00:12:54.040 
"r_mbytes_per_sec": 0, 00:12:54.040 "w_mbytes_per_sec": 0 00:12:54.040 }, 00:12:54.040 "claimed": true, 00:12:54.040 "claim_type": "exclusive_write", 00:12:54.040 "zoned": false, 00:12:54.040 "supported_io_types": { 00:12:54.040 "read": true, 00:12:54.040 "write": true, 00:12:54.040 "unmap": true, 00:12:54.040 "flush": true, 00:12:54.040 "reset": true, 00:12:54.040 "nvme_admin": false, 00:12:54.040 "nvme_io": false, 00:12:54.040 "nvme_io_md": false, 00:12:54.040 "write_zeroes": true, 00:12:54.040 "zcopy": true, 00:12:54.040 "get_zone_info": false, 00:12:54.040 "zone_management": false, 00:12:54.040 "zone_append": false, 00:12:54.040 "compare": false, 00:12:54.040 "compare_and_write": false, 00:12:54.040 "abort": true, 00:12:54.040 "seek_hole": false, 00:12:54.040 "seek_data": false, 00:12:54.040 "copy": true, 00:12:54.040 "nvme_iov_md": false 00:12:54.040 }, 00:12:54.040 "memory_domains": [ 00:12:54.040 { 00:12:54.040 "dma_device_id": "system", 00:12:54.040 "dma_device_type": 1 00:12:54.040 }, 00:12:54.040 { 00:12:54.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.040 "dma_device_type": 2 00:12:54.040 } 00:12:54.040 ], 00:12:54.040 "driver_specific": {} 00:12:54.040 } 00:12:54.040 ] 00:12:54.040 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.040 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:54.040 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:54.040 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:54.040 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:12:54.040 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:54.040 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:12:54.040 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:54.040 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:54.040 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:54.040 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.040 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.040 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.040 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.040 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.040 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.040 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.040 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.040 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.040 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.040 "name": "Existed_Raid", 00:12:54.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.040 "strip_size_kb": 64, 00:12:54.040 "state": "configuring", 00:12:54.040 "raid_level": "raid5f", 00:12:54.040 "superblock": false, 00:12:54.040 "num_base_bdevs": 4, 00:12:54.040 "num_base_bdevs_discovered": 2, 00:12:54.040 "num_base_bdevs_operational": 4, 00:12:54.040 "base_bdevs_list": [ 00:12:54.040 { 00:12:54.040 "name": "BaseBdev1", 00:12:54.040 "uuid": 
"0bcf704c-040c-4580-96f2-b7080855da9e", 00:12:54.040 "is_configured": true, 00:12:54.040 "data_offset": 0, 00:12:54.040 "data_size": 65536 00:12:54.040 }, 00:12:54.040 { 00:12:54.040 "name": "BaseBdev2", 00:12:54.040 "uuid": "5b6c820f-feb0-4c8d-b884-10209bf603f5", 00:12:54.040 "is_configured": true, 00:12:54.040 "data_offset": 0, 00:12:54.040 "data_size": 65536 00:12:54.040 }, 00:12:54.040 { 00:12:54.040 "name": "BaseBdev3", 00:12:54.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.040 "is_configured": false, 00:12:54.040 "data_offset": 0, 00:12:54.040 "data_size": 0 00:12:54.040 }, 00:12:54.040 { 00:12:54.040 "name": "BaseBdev4", 00:12:54.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.040 "is_configured": false, 00:12:54.040 "data_offset": 0, 00:12:54.040 "data_size": 0 00:12:54.040 } 00:12:54.040 ] 00:12:54.040 }' 00:12:54.040 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.040 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.301 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:54.301 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.301 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.301 [2024-10-01 14:37:45.915053] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:54.301 BaseBdev3 00:12:54.301 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.301 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:54.301 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:54.301 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- 
# local bdev_timeout= 00:12:54.301 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:54.301 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:54.301 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:54.301 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:54.301 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.301 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.301 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.301 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:54.301 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.301 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.301 [ 00:12:54.301 { 00:12:54.301 "name": "BaseBdev3", 00:12:54.301 "aliases": [ 00:12:54.301 "078c834f-4a23-4d2b-86d6-737f15a003ed" 00:12:54.301 ], 00:12:54.301 "product_name": "Malloc disk", 00:12:54.301 "block_size": 512, 00:12:54.301 "num_blocks": 65536, 00:12:54.301 "uuid": "078c834f-4a23-4d2b-86d6-737f15a003ed", 00:12:54.301 "assigned_rate_limits": { 00:12:54.301 "rw_ios_per_sec": 0, 00:12:54.301 "rw_mbytes_per_sec": 0, 00:12:54.301 "r_mbytes_per_sec": 0, 00:12:54.301 "w_mbytes_per_sec": 0 00:12:54.301 }, 00:12:54.301 "claimed": true, 00:12:54.301 "claim_type": "exclusive_write", 00:12:54.301 "zoned": false, 00:12:54.301 "supported_io_types": { 00:12:54.301 "read": true, 00:12:54.301 "write": true, 00:12:54.301 "unmap": true, 00:12:54.301 "flush": true, 00:12:54.301 "reset": true, 00:12:54.301 "nvme_admin": false, 
00:12:54.301 "nvme_io": false, 00:12:54.301 "nvme_io_md": false, 00:12:54.301 "write_zeroes": true, 00:12:54.301 "zcopy": true, 00:12:54.301 "get_zone_info": false, 00:12:54.301 "zone_management": false, 00:12:54.301 "zone_append": false, 00:12:54.301 "compare": false, 00:12:54.301 "compare_and_write": false, 00:12:54.301 "abort": true, 00:12:54.301 "seek_hole": false, 00:12:54.301 "seek_data": false, 00:12:54.301 "copy": true, 00:12:54.301 "nvme_iov_md": false 00:12:54.301 }, 00:12:54.301 "memory_domains": [ 00:12:54.301 { 00:12:54.301 "dma_device_id": "system", 00:12:54.301 "dma_device_type": 1 00:12:54.301 }, 00:12:54.301 { 00:12:54.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.301 "dma_device_type": 2 00:12:54.301 } 00:12:54.301 ], 00:12:54.301 "driver_specific": {} 00:12:54.301 } 00:12:54.301 ] 00:12:54.301 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.302 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:54.302 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:54.302 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:54.302 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:12:54.302 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:54.302 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:54.302 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:54.302 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:54.302 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:12:54.302 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.302 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.302 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.302 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.302 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.302 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.302 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.302 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.302 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.562 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.562 "name": "Existed_Raid", 00:12:54.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.562 "strip_size_kb": 64, 00:12:54.562 "state": "configuring", 00:12:54.562 "raid_level": "raid5f", 00:12:54.562 "superblock": false, 00:12:54.562 "num_base_bdevs": 4, 00:12:54.562 "num_base_bdevs_discovered": 3, 00:12:54.562 "num_base_bdevs_operational": 4, 00:12:54.562 "base_bdevs_list": [ 00:12:54.562 { 00:12:54.562 "name": "BaseBdev1", 00:12:54.562 "uuid": "0bcf704c-040c-4580-96f2-b7080855da9e", 00:12:54.562 "is_configured": true, 00:12:54.562 "data_offset": 0, 00:12:54.562 "data_size": 65536 00:12:54.562 }, 00:12:54.562 { 00:12:54.562 "name": "BaseBdev2", 00:12:54.562 "uuid": "5b6c820f-feb0-4c8d-b884-10209bf603f5", 00:12:54.562 "is_configured": true, 00:12:54.562 "data_offset": 0, 00:12:54.562 "data_size": 65536 00:12:54.562 }, 00:12:54.562 { 
00:12:54.562 "name": "BaseBdev3", 00:12:54.562 "uuid": "078c834f-4a23-4d2b-86d6-737f15a003ed", 00:12:54.562 "is_configured": true, 00:12:54.562 "data_offset": 0, 00:12:54.562 "data_size": 65536 00:12:54.562 }, 00:12:54.562 { 00:12:54.562 "name": "BaseBdev4", 00:12:54.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.562 "is_configured": false, 00:12:54.562 "data_offset": 0, 00:12:54.562 "data_size": 0 00:12:54.562 } 00:12:54.562 ] 00:12:54.562 }' 00:12:54.562 14:37:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.562 14:37:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.824 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:54.824 14:37:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.824 14:37:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.824 [2024-10-01 14:37:46.322074] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:54.824 [2024-10-01 14:37:46.322130] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:54.824 [2024-10-01 14:37:46.322141] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:54.824 [2024-10-01 14:37:46.322394] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:54.824 [2024-10-01 14:37:46.327366] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:54.824 [2024-10-01 14:37:46.327389] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:54.824 [2024-10-01 14:37:46.327631] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:54.824 BaseBdev4 00:12:54.824 14:37:46 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.824 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:54.824 14:37:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:12:54.824 14:37:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:54.824 14:37:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:54.824 14:37:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:54.824 14:37:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:54.824 14:37:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:54.824 14:37:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.824 14:37:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.824 14:37:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.824 14:37:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:54.824 14:37:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.824 14:37:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.824 [ 00:12:54.824 { 00:12:54.824 "name": "BaseBdev4", 00:12:54.824 "aliases": [ 00:12:54.824 "fbd27255-ac27-4c80-aad6-ae7500474304" 00:12:54.824 ], 00:12:54.824 "product_name": "Malloc disk", 00:12:54.824 "block_size": 512, 00:12:54.824 "num_blocks": 65536, 00:12:54.824 "uuid": "fbd27255-ac27-4c80-aad6-ae7500474304", 00:12:54.824 "assigned_rate_limits": { 00:12:54.824 "rw_ios_per_sec": 0, 00:12:54.824 
"rw_mbytes_per_sec": 0, 00:12:54.824 "r_mbytes_per_sec": 0, 00:12:54.824 "w_mbytes_per_sec": 0 00:12:54.824 }, 00:12:54.824 "claimed": true, 00:12:54.824 "claim_type": "exclusive_write", 00:12:54.824 "zoned": false, 00:12:54.824 "supported_io_types": { 00:12:54.824 "read": true, 00:12:54.824 "write": true, 00:12:54.824 "unmap": true, 00:12:54.824 "flush": true, 00:12:54.824 "reset": true, 00:12:54.824 "nvme_admin": false, 00:12:54.824 "nvme_io": false, 00:12:54.824 "nvme_io_md": false, 00:12:54.824 "write_zeroes": true, 00:12:54.824 "zcopy": true, 00:12:54.824 "get_zone_info": false, 00:12:54.824 "zone_management": false, 00:12:54.824 "zone_append": false, 00:12:54.824 "compare": false, 00:12:54.824 "compare_and_write": false, 00:12:54.824 "abort": true, 00:12:54.824 "seek_hole": false, 00:12:54.824 "seek_data": false, 00:12:54.824 "copy": true, 00:12:54.824 "nvme_iov_md": false 00:12:54.824 }, 00:12:54.824 "memory_domains": [ 00:12:54.824 { 00:12:54.824 "dma_device_id": "system", 00:12:54.824 "dma_device_type": 1 00:12:54.824 }, 00:12:54.824 { 00:12:54.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.824 "dma_device_type": 2 00:12:54.824 } 00:12:54.824 ], 00:12:54.824 "driver_specific": {} 00:12:54.824 } 00:12:54.824 ] 00:12:54.824 14:37:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.824 14:37:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:54.824 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:54.824 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:54.824 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:12:54.824 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:54.824 14:37:46 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:54.824 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:54.824 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:54.824 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:54.824 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.824 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.824 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.824 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.824 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.824 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.824 14:37:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.824 14:37:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.824 14:37:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.824 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.824 "name": "Existed_Raid", 00:12:54.824 "uuid": "52519fa4-56a4-4898-ad5d-be92121e984a", 00:12:54.824 "strip_size_kb": 64, 00:12:54.824 "state": "online", 00:12:54.824 "raid_level": "raid5f", 00:12:54.824 "superblock": false, 00:12:54.824 "num_base_bdevs": 4, 00:12:54.824 "num_base_bdevs_discovered": 4, 00:12:54.824 "num_base_bdevs_operational": 4, 00:12:54.824 "base_bdevs_list": [ 00:12:54.824 { 00:12:54.824 "name": 
"BaseBdev1", 00:12:54.824 "uuid": "0bcf704c-040c-4580-96f2-b7080855da9e", 00:12:54.824 "is_configured": true, 00:12:54.824 "data_offset": 0, 00:12:54.824 "data_size": 65536 00:12:54.824 }, 00:12:54.824 { 00:12:54.824 "name": "BaseBdev2", 00:12:54.824 "uuid": "5b6c820f-feb0-4c8d-b884-10209bf603f5", 00:12:54.824 "is_configured": true, 00:12:54.824 "data_offset": 0, 00:12:54.824 "data_size": 65536 00:12:54.824 }, 00:12:54.824 { 00:12:54.824 "name": "BaseBdev3", 00:12:54.824 "uuid": "078c834f-4a23-4d2b-86d6-737f15a003ed", 00:12:54.824 "is_configured": true, 00:12:54.824 "data_offset": 0, 00:12:54.824 "data_size": 65536 00:12:54.824 }, 00:12:54.824 { 00:12:54.824 "name": "BaseBdev4", 00:12:54.824 "uuid": "fbd27255-ac27-4c80-aad6-ae7500474304", 00:12:54.824 "is_configured": true, 00:12:54.824 "data_offset": 0, 00:12:54.824 "data_size": 65536 00:12:54.824 } 00:12:54.824 ] 00:12:54.824 }' 00:12:54.824 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.824 14:37:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.084 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:55.084 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:55.084 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:55.084 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:55.084 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:55.084 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:55.084 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:55.084 14:37:46 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.084 14:37:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.084 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:55.084 [2024-10-01 14:37:46.665131] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:55.084 14:37:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.084 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:55.084 "name": "Existed_Raid", 00:12:55.084 "aliases": [ 00:12:55.084 "52519fa4-56a4-4898-ad5d-be92121e984a" 00:12:55.084 ], 00:12:55.084 "product_name": "Raid Volume", 00:12:55.084 "block_size": 512, 00:12:55.084 "num_blocks": 196608, 00:12:55.084 "uuid": "52519fa4-56a4-4898-ad5d-be92121e984a", 00:12:55.084 "assigned_rate_limits": { 00:12:55.084 "rw_ios_per_sec": 0, 00:12:55.084 "rw_mbytes_per_sec": 0, 00:12:55.084 "r_mbytes_per_sec": 0, 00:12:55.084 "w_mbytes_per_sec": 0 00:12:55.084 }, 00:12:55.084 "claimed": false, 00:12:55.084 "zoned": false, 00:12:55.084 "supported_io_types": { 00:12:55.084 "read": true, 00:12:55.084 "write": true, 00:12:55.084 "unmap": false, 00:12:55.084 "flush": false, 00:12:55.084 "reset": true, 00:12:55.084 "nvme_admin": false, 00:12:55.084 "nvme_io": false, 00:12:55.084 "nvme_io_md": false, 00:12:55.084 "write_zeroes": true, 00:12:55.084 "zcopy": false, 00:12:55.084 "get_zone_info": false, 00:12:55.084 "zone_management": false, 00:12:55.084 "zone_append": false, 00:12:55.084 "compare": false, 00:12:55.084 "compare_and_write": false, 00:12:55.084 "abort": false, 00:12:55.084 "seek_hole": false, 00:12:55.084 "seek_data": false, 00:12:55.084 "copy": false, 00:12:55.084 "nvme_iov_md": false 00:12:55.084 }, 00:12:55.085 "driver_specific": { 00:12:55.085 "raid": { 00:12:55.085 "uuid": "52519fa4-56a4-4898-ad5d-be92121e984a", 00:12:55.085 "strip_size_kb": 64, 
00:12:55.085 "state": "online", 00:12:55.085 "raid_level": "raid5f", 00:12:55.085 "superblock": false, 00:12:55.085 "num_base_bdevs": 4, 00:12:55.085 "num_base_bdevs_discovered": 4, 00:12:55.085 "num_base_bdevs_operational": 4, 00:12:55.085 "base_bdevs_list": [ 00:12:55.085 { 00:12:55.085 "name": "BaseBdev1", 00:12:55.085 "uuid": "0bcf704c-040c-4580-96f2-b7080855da9e", 00:12:55.085 "is_configured": true, 00:12:55.085 "data_offset": 0, 00:12:55.085 "data_size": 65536 00:12:55.085 }, 00:12:55.085 { 00:12:55.085 "name": "BaseBdev2", 00:12:55.085 "uuid": "5b6c820f-feb0-4c8d-b884-10209bf603f5", 00:12:55.085 "is_configured": true, 00:12:55.085 "data_offset": 0, 00:12:55.085 "data_size": 65536 00:12:55.085 }, 00:12:55.085 { 00:12:55.085 "name": "BaseBdev3", 00:12:55.085 "uuid": "078c834f-4a23-4d2b-86d6-737f15a003ed", 00:12:55.085 "is_configured": true, 00:12:55.085 "data_offset": 0, 00:12:55.085 "data_size": 65536 00:12:55.085 }, 00:12:55.085 { 00:12:55.085 "name": "BaseBdev4", 00:12:55.085 "uuid": "fbd27255-ac27-4c80-aad6-ae7500474304", 00:12:55.085 "is_configured": true, 00:12:55.085 "data_offset": 0, 00:12:55.085 "data_size": 65536 00:12:55.085 } 00:12:55.085 ] 00:12:55.085 } 00:12:55.085 } 00:12:55.085 }' 00:12:55.085 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:55.085 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:55.085 BaseBdev2 00:12:55.085 BaseBdev3 00:12:55.085 BaseBdev4' 00:12:55.085 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.085 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:55.085 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:55.085 14:37:46 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:55.085 14:37:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.085 14:37:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.085 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.085 14:37:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:12:55.345 [2024-10-01 14:37:46.881022] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.345 14:37:46 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.345 "name": "Existed_Raid", 00:12:55.345 "uuid": "52519fa4-56a4-4898-ad5d-be92121e984a", 00:12:55.345 "strip_size_kb": 64, 00:12:55.345 "state": "online", 00:12:55.345 "raid_level": "raid5f", 00:12:55.345 "superblock": false, 00:12:55.345 "num_base_bdevs": 4, 00:12:55.345 "num_base_bdevs_discovered": 3, 00:12:55.345 "num_base_bdevs_operational": 3, 00:12:55.345 "base_bdevs_list": [ 00:12:55.345 { 00:12:55.345 "name": null, 00:12:55.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.345 "is_configured": false, 00:12:55.345 "data_offset": 0, 00:12:55.345 "data_size": 65536 00:12:55.345 }, 00:12:55.345 { 00:12:55.345 "name": "BaseBdev2", 00:12:55.345 "uuid": "5b6c820f-feb0-4c8d-b884-10209bf603f5", 00:12:55.345 "is_configured": true, 00:12:55.345 "data_offset": 0, 00:12:55.345 "data_size": 65536 00:12:55.345 }, 00:12:55.345 { 00:12:55.345 "name": "BaseBdev3", 00:12:55.345 "uuid": "078c834f-4a23-4d2b-86d6-737f15a003ed", 00:12:55.345 "is_configured": true, 00:12:55.345 "data_offset": 0, 00:12:55.345 "data_size": 65536 00:12:55.345 }, 00:12:55.345 { 00:12:55.345 "name": "BaseBdev4", 00:12:55.345 "uuid": "fbd27255-ac27-4c80-aad6-ae7500474304", 00:12:55.345 "is_configured": true, 00:12:55.345 "data_offset": 0, 00:12:55.345 "data_size": 65536 00:12:55.345 } 00:12:55.345 ] 00:12:55.345 }' 00:12:55.345 
14:37:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.345 14:37:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.605 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:55.605 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:55.605 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.605 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.605 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:55.605 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.605 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.605 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:55.605 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:55.605 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:55.605 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.605 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.605 [2024-10-01 14:37:47.280834] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:55.605 [2024-10-01 14:37:47.280934] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:55.864 [2024-10-01 14:37:47.339135] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:55.865 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:12:55.865 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:55.865 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:55.865 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:55.865 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.865 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.865 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.865 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.865 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:55.865 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:55.865 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:55.865 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.865 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.865 [2024-10-01 14:37:47.363175] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:55.865 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.865 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:55.865 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:55.865 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.865 14:37:47 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.865 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.865 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:55.865 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.865 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:55.865 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:55.865 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:55.865 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.865 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.865 [2024-10-01 14:37:47.457431] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:55.865 [2024-10-01 14:37:47.457487] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:55.865 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.865 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:55.865 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:55.865 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:55.865 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.865 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.865 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:55.865 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.126 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:56.126 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:56.126 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:56.126 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:56.126 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:56.126 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:56.126 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.126 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.126 BaseBdev2 00:12:56.126 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.126 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:56.126 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:56.126 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:56.126 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:56.126 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:56.126 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:56.126 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:56.126 14:37:47 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.126 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.126 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.126 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:56.126 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.126 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.126 [ 00:12:56.126 { 00:12:56.126 "name": "BaseBdev2", 00:12:56.126 "aliases": [ 00:12:56.126 "d213cf4e-85c8-4c7a-aef9-e1342f98cbcd" 00:12:56.126 ], 00:12:56.126 "product_name": "Malloc disk", 00:12:56.126 "block_size": 512, 00:12:56.126 "num_blocks": 65536, 00:12:56.126 "uuid": "d213cf4e-85c8-4c7a-aef9-e1342f98cbcd", 00:12:56.126 "assigned_rate_limits": { 00:12:56.126 "rw_ios_per_sec": 0, 00:12:56.126 "rw_mbytes_per_sec": 0, 00:12:56.126 "r_mbytes_per_sec": 0, 00:12:56.126 "w_mbytes_per_sec": 0 00:12:56.126 }, 00:12:56.126 "claimed": false, 00:12:56.126 "zoned": false, 00:12:56.126 "supported_io_types": { 00:12:56.126 "read": true, 00:12:56.126 "write": true, 00:12:56.126 "unmap": true, 00:12:56.126 "flush": true, 00:12:56.126 "reset": true, 00:12:56.126 "nvme_admin": false, 00:12:56.126 "nvme_io": false, 00:12:56.126 "nvme_io_md": false, 00:12:56.126 "write_zeroes": true, 00:12:56.126 "zcopy": true, 00:12:56.126 "get_zone_info": false, 00:12:56.127 "zone_management": false, 00:12:56.127 "zone_append": false, 00:12:56.127 "compare": false, 00:12:56.127 "compare_and_write": false, 00:12:56.127 "abort": true, 00:12:56.127 "seek_hole": false, 00:12:56.127 "seek_data": false, 00:12:56.127 "copy": true, 00:12:56.127 "nvme_iov_md": false 00:12:56.127 }, 00:12:56.127 "memory_domains": [ 00:12:56.127 { 00:12:56.127 "dma_device_id": "system", 00:12:56.127 
"dma_device_type": 1 00:12:56.127 }, 00:12:56.127 { 00:12:56.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.127 "dma_device_type": 2 00:12:56.127 } 00:12:56.127 ], 00:12:56.127 "driver_specific": {} 00:12:56.127 } 00:12:56.127 ] 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.127 BaseBdev3 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:56.127 14:37:47 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.127 [ 00:12:56.127 { 00:12:56.127 "name": "BaseBdev3", 00:12:56.127 "aliases": [ 00:12:56.127 "1e272a64-a78e-4d5d-bed0-eccbf7d59da4" 00:12:56.127 ], 00:12:56.127 "product_name": "Malloc disk", 00:12:56.127 "block_size": 512, 00:12:56.127 "num_blocks": 65536, 00:12:56.127 "uuid": "1e272a64-a78e-4d5d-bed0-eccbf7d59da4", 00:12:56.127 "assigned_rate_limits": { 00:12:56.127 "rw_ios_per_sec": 0, 00:12:56.127 "rw_mbytes_per_sec": 0, 00:12:56.127 "r_mbytes_per_sec": 0, 00:12:56.127 "w_mbytes_per_sec": 0 00:12:56.127 }, 00:12:56.127 "claimed": false, 00:12:56.127 "zoned": false, 00:12:56.127 "supported_io_types": { 00:12:56.127 "read": true, 00:12:56.127 "write": true, 00:12:56.127 "unmap": true, 00:12:56.127 "flush": true, 00:12:56.127 "reset": true, 00:12:56.127 "nvme_admin": false, 00:12:56.127 "nvme_io": false, 00:12:56.127 "nvme_io_md": false, 00:12:56.127 "write_zeroes": true, 00:12:56.127 "zcopy": true, 00:12:56.127 "get_zone_info": false, 00:12:56.127 "zone_management": false, 00:12:56.127 "zone_append": false, 00:12:56.127 "compare": false, 00:12:56.127 "compare_and_write": false, 00:12:56.127 "abort": true, 00:12:56.127 "seek_hole": false, 00:12:56.127 "seek_data": false, 00:12:56.127 "copy": true, 00:12:56.127 "nvme_iov_md": false 00:12:56.127 }, 00:12:56.127 "memory_domains": [ 00:12:56.127 { 00:12:56.127 
"dma_device_id": "system", 00:12:56.127 "dma_device_type": 1 00:12:56.127 }, 00:12:56.127 { 00:12:56.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.127 "dma_device_type": 2 00:12:56.127 } 00:12:56.127 ], 00:12:56.127 "driver_specific": {} 00:12:56.127 } 00:12:56.127 ] 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.127 BaseBdev4 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 
00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.127 [ 00:12:56.127 { 00:12:56.127 "name": "BaseBdev4", 00:12:56.127 "aliases": [ 00:12:56.127 "b85c6e92-09d9-41ae-a3a1-afa0c513a9fd" 00:12:56.127 ], 00:12:56.127 "product_name": "Malloc disk", 00:12:56.127 "block_size": 512, 00:12:56.127 "num_blocks": 65536, 00:12:56.127 "uuid": "b85c6e92-09d9-41ae-a3a1-afa0c513a9fd", 00:12:56.127 "assigned_rate_limits": { 00:12:56.127 "rw_ios_per_sec": 0, 00:12:56.127 "rw_mbytes_per_sec": 0, 00:12:56.127 "r_mbytes_per_sec": 0, 00:12:56.127 "w_mbytes_per_sec": 0 00:12:56.127 }, 00:12:56.127 "claimed": false, 00:12:56.127 "zoned": false, 00:12:56.127 "supported_io_types": { 00:12:56.127 "read": true, 00:12:56.127 "write": true, 00:12:56.127 "unmap": true, 00:12:56.127 "flush": true, 00:12:56.127 "reset": true, 00:12:56.127 "nvme_admin": false, 00:12:56.127 "nvme_io": false, 00:12:56.127 "nvme_io_md": false, 00:12:56.127 "write_zeroes": true, 00:12:56.127 "zcopy": true, 00:12:56.127 "get_zone_info": false, 00:12:56.127 "zone_management": false, 00:12:56.127 "zone_append": false, 00:12:56.127 "compare": false, 00:12:56.127 "compare_and_write": false, 00:12:56.127 "abort": true, 00:12:56.127 "seek_hole": false, 00:12:56.127 "seek_data": false, 00:12:56.127 "copy": true, 00:12:56.127 "nvme_iov_md": false 00:12:56.127 }, 00:12:56.127 "memory_domains": [ 
00:12:56.127 { 00:12:56.127 "dma_device_id": "system", 00:12:56.127 "dma_device_type": 1 00:12:56.127 }, 00:12:56.127 { 00:12:56.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.127 "dma_device_type": 2 00:12:56.127 } 00:12:56.127 ], 00:12:56.127 "driver_specific": {} 00:12:56.127 } 00:12:56.127 ] 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.127 [2024-10-01 14:37:47.734953] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:56.127 [2024-10-01 14:37:47.735016] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:56.127 [2024-10-01 14:37:47.735039] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:56.127 [2024-10-01 14:37:47.736911] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:56.127 [2024-10-01 14:37:47.736963] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.127 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.128 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.128 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.128 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.128 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.128 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.128 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.128 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.128 "name": "Existed_Raid", 00:12:56.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.128 "strip_size_kb": 64, 00:12:56.128 "state": "configuring", 00:12:56.128 "raid_level": "raid5f", 00:12:56.128 
"superblock": false, 00:12:56.128 "num_base_bdevs": 4, 00:12:56.128 "num_base_bdevs_discovered": 3, 00:12:56.128 "num_base_bdevs_operational": 4, 00:12:56.128 "base_bdevs_list": [ 00:12:56.128 { 00:12:56.128 "name": "BaseBdev1", 00:12:56.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.128 "is_configured": false, 00:12:56.128 "data_offset": 0, 00:12:56.128 "data_size": 0 00:12:56.128 }, 00:12:56.128 { 00:12:56.128 "name": "BaseBdev2", 00:12:56.128 "uuid": "d213cf4e-85c8-4c7a-aef9-e1342f98cbcd", 00:12:56.128 "is_configured": true, 00:12:56.128 "data_offset": 0, 00:12:56.128 "data_size": 65536 00:12:56.128 }, 00:12:56.128 { 00:12:56.128 "name": "BaseBdev3", 00:12:56.128 "uuid": "1e272a64-a78e-4d5d-bed0-eccbf7d59da4", 00:12:56.128 "is_configured": true, 00:12:56.128 "data_offset": 0, 00:12:56.128 "data_size": 65536 00:12:56.128 }, 00:12:56.128 { 00:12:56.128 "name": "BaseBdev4", 00:12:56.128 "uuid": "b85c6e92-09d9-41ae-a3a1-afa0c513a9fd", 00:12:56.128 "is_configured": true, 00:12:56.128 "data_offset": 0, 00:12:56.128 "data_size": 65536 00:12:56.128 } 00:12:56.128 ] 00:12:56.128 }' 00:12:56.128 14:37:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.128 14:37:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.387 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:56.387 14:37:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.387 14:37:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.387 [2024-10-01 14:37:48.043011] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:56.387 14:37:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.387 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:12:56.387 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:56.387 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:56.387 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:56.387 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:56.387 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:56.387 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.387 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.387 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.387 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.387 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.387 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.387 14:37:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.387 14:37:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.387 14:37:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.648 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.648 "name": "Existed_Raid", 00:12:56.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.648 "strip_size_kb": 64, 00:12:56.649 "state": "configuring", 00:12:56.649 "raid_level": "raid5f", 00:12:56.649 "superblock": false, 
00:12:56.649 "num_base_bdevs": 4, 00:12:56.649 "num_base_bdevs_discovered": 2, 00:12:56.649 "num_base_bdevs_operational": 4, 00:12:56.649 "base_bdevs_list": [ 00:12:56.649 { 00:12:56.649 "name": "BaseBdev1", 00:12:56.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.649 "is_configured": false, 00:12:56.649 "data_offset": 0, 00:12:56.649 "data_size": 0 00:12:56.649 }, 00:12:56.649 { 00:12:56.649 "name": null, 00:12:56.649 "uuid": "d213cf4e-85c8-4c7a-aef9-e1342f98cbcd", 00:12:56.649 "is_configured": false, 00:12:56.649 "data_offset": 0, 00:12:56.649 "data_size": 65536 00:12:56.649 }, 00:12:56.649 { 00:12:56.649 "name": "BaseBdev3", 00:12:56.649 "uuid": "1e272a64-a78e-4d5d-bed0-eccbf7d59da4", 00:12:56.649 "is_configured": true, 00:12:56.649 "data_offset": 0, 00:12:56.649 "data_size": 65536 00:12:56.649 }, 00:12:56.649 { 00:12:56.649 "name": "BaseBdev4", 00:12:56.649 "uuid": "b85c6e92-09d9-41ae-a3a1-afa0c513a9fd", 00:12:56.649 "is_configured": true, 00:12:56.649 "data_offset": 0, 00:12:56.649 "data_size": 65536 00:12:56.649 } 00:12:56.649 ] 00:12:56.649 }' 00:12:56.649 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.649 14:37:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.910 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:56.910 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.910 14:37:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.910 14:37:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.910 14:37:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.910 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:56.910 
14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:56.910 14:37:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.910 14:37:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.910 [2024-10-01 14:37:48.425502] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:56.910 BaseBdev1 00:12:56.910 14:37:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.910 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:56.910 14:37:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:56.910 14:37:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:56.910 14:37:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:56.910 14:37:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:56.910 14:37:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:56.910 14:37:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:56.910 14:37:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.910 14:37:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.910 14:37:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.910 14:37:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:56.910 14:37:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.910 
14:37:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.910 [ 00:12:56.910 { 00:12:56.910 "name": "BaseBdev1", 00:12:56.910 "aliases": [ 00:12:56.910 "d977e033-a21a-4710-8cac-bbe93195337d" 00:12:56.910 ], 00:12:56.910 "product_name": "Malloc disk", 00:12:56.910 "block_size": 512, 00:12:56.910 "num_blocks": 65536, 00:12:56.910 "uuid": "d977e033-a21a-4710-8cac-bbe93195337d", 00:12:56.910 "assigned_rate_limits": { 00:12:56.910 "rw_ios_per_sec": 0, 00:12:56.910 "rw_mbytes_per_sec": 0, 00:12:56.910 "r_mbytes_per_sec": 0, 00:12:56.910 "w_mbytes_per_sec": 0 00:12:56.910 }, 00:12:56.910 "claimed": true, 00:12:56.910 "claim_type": "exclusive_write", 00:12:56.910 "zoned": false, 00:12:56.910 "supported_io_types": { 00:12:56.910 "read": true, 00:12:56.910 "write": true, 00:12:56.910 "unmap": true, 00:12:56.910 "flush": true, 00:12:56.910 "reset": true, 00:12:56.910 "nvme_admin": false, 00:12:56.910 "nvme_io": false, 00:12:56.910 "nvme_io_md": false, 00:12:56.910 "write_zeroes": true, 00:12:56.910 "zcopy": true, 00:12:56.910 "get_zone_info": false, 00:12:56.910 "zone_management": false, 00:12:56.910 "zone_append": false, 00:12:56.910 "compare": false, 00:12:56.910 "compare_and_write": false, 00:12:56.910 "abort": true, 00:12:56.910 "seek_hole": false, 00:12:56.910 "seek_data": false, 00:12:56.910 "copy": true, 00:12:56.910 "nvme_iov_md": false 00:12:56.911 }, 00:12:56.911 "memory_domains": [ 00:12:56.911 { 00:12:56.911 "dma_device_id": "system", 00:12:56.911 "dma_device_type": 1 00:12:56.911 }, 00:12:56.911 { 00:12:56.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.911 "dma_device_type": 2 00:12:56.911 } 00:12:56.911 ], 00:12:56.911 "driver_specific": {} 00:12:56.911 } 00:12:56.911 ] 00:12:56.911 14:37:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.911 14:37:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:56.911 14:37:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:12:56.911 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:56.911 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:56.911 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:56.911 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:56.911 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:56.911 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.911 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.911 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.911 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.911 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.911 14:37:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.911 14:37:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.911 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.911 14:37:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.911 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.911 "name": "Existed_Raid", 00:12:56.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.911 "strip_size_kb": 64, 00:12:56.911 "state": 
"configuring", 00:12:56.911 "raid_level": "raid5f", 00:12:56.911 "superblock": false, 00:12:56.911 "num_base_bdevs": 4, 00:12:56.911 "num_base_bdevs_discovered": 3, 00:12:56.911 "num_base_bdevs_operational": 4, 00:12:56.911 "base_bdevs_list": [ 00:12:56.911 { 00:12:56.911 "name": "BaseBdev1", 00:12:56.911 "uuid": "d977e033-a21a-4710-8cac-bbe93195337d", 00:12:56.911 "is_configured": true, 00:12:56.911 "data_offset": 0, 00:12:56.911 "data_size": 65536 00:12:56.911 }, 00:12:56.911 { 00:12:56.911 "name": null, 00:12:56.911 "uuid": "d213cf4e-85c8-4c7a-aef9-e1342f98cbcd", 00:12:56.911 "is_configured": false, 00:12:56.911 "data_offset": 0, 00:12:56.911 "data_size": 65536 00:12:56.911 }, 00:12:56.911 { 00:12:56.911 "name": "BaseBdev3", 00:12:56.911 "uuid": "1e272a64-a78e-4d5d-bed0-eccbf7d59da4", 00:12:56.911 "is_configured": true, 00:12:56.911 "data_offset": 0, 00:12:56.911 "data_size": 65536 00:12:56.911 }, 00:12:56.911 { 00:12:56.911 "name": "BaseBdev4", 00:12:56.911 "uuid": "b85c6e92-09d9-41ae-a3a1-afa0c513a9fd", 00:12:56.911 "is_configured": true, 00:12:56.911 "data_offset": 0, 00:12:56.911 "data_size": 65536 00:12:56.911 } 00:12:56.911 ] 00:12:56.911 }' 00:12:56.911 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.911 14:37:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.171 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.171 14:37:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.171 14:37:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.171 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:57.171 14:37:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.172 14:37:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:57.172 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:57.172 14:37:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.172 14:37:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.172 [2024-10-01 14:37:48.801655] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:57.172 14:37:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.172 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:12:57.172 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:57.172 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:57.172 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:57.172 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:57.172 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:57.172 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.172 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.172 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.172 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.172 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.172 14:37:48 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.172 14:37:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.172 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:57.172 14:37:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.172 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.172 "name": "Existed_Raid", 00:12:57.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.172 "strip_size_kb": 64, 00:12:57.172 "state": "configuring", 00:12:57.172 "raid_level": "raid5f", 00:12:57.172 "superblock": false, 00:12:57.172 "num_base_bdevs": 4, 00:12:57.172 "num_base_bdevs_discovered": 2, 00:12:57.172 "num_base_bdevs_operational": 4, 00:12:57.172 "base_bdevs_list": [ 00:12:57.172 { 00:12:57.172 "name": "BaseBdev1", 00:12:57.172 "uuid": "d977e033-a21a-4710-8cac-bbe93195337d", 00:12:57.172 "is_configured": true, 00:12:57.172 "data_offset": 0, 00:12:57.172 "data_size": 65536 00:12:57.172 }, 00:12:57.172 { 00:12:57.172 "name": null, 00:12:57.172 "uuid": "d213cf4e-85c8-4c7a-aef9-e1342f98cbcd", 00:12:57.172 "is_configured": false, 00:12:57.172 "data_offset": 0, 00:12:57.172 "data_size": 65536 00:12:57.172 }, 00:12:57.172 { 00:12:57.172 "name": null, 00:12:57.172 "uuid": "1e272a64-a78e-4d5d-bed0-eccbf7d59da4", 00:12:57.172 "is_configured": false, 00:12:57.172 "data_offset": 0, 00:12:57.172 "data_size": 65536 00:12:57.172 }, 00:12:57.172 { 00:12:57.172 "name": "BaseBdev4", 00:12:57.172 "uuid": "b85c6e92-09d9-41ae-a3a1-afa0c513a9fd", 00:12:57.172 "is_configured": true, 00:12:57.172 "data_offset": 0, 00:12:57.172 "data_size": 65536 00:12:57.172 } 00:12:57.172 ] 00:12:57.172 }' 00:12:57.172 14:37:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.172 14:37:48 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.432 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.432 14:37:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.432 14:37:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.432 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:57.693 14:37:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.693 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:57.693 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:57.693 14:37:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.693 14:37:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.693 [2024-10-01 14:37:49.137771] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:57.693 14:37:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.693 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:12:57.693 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:57.693 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:57.693 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:57.693 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:57.693 
14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:57.693 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.693 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.693 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.693 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.693 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:57.693 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.694 14:37:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.694 14:37:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.694 14:37:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.694 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.694 "name": "Existed_Raid", 00:12:57.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.694 "strip_size_kb": 64, 00:12:57.694 "state": "configuring", 00:12:57.694 "raid_level": "raid5f", 00:12:57.694 "superblock": false, 00:12:57.694 "num_base_bdevs": 4, 00:12:57.694 "num_base_bdevs_discovered": 3, 00:12:57.694 "num_base_bdevs_operational": 4, 00:12:57.694 "base_bdevs_list": [ 00:12:57.694 { 00:12:57.694 "name": "BaseBdev1", 00:12:57.694 "uuid": "d977e033-a21a-4710-8cac-bbe93195337d", 00:12:57.694 "is_configured": true, 00:12:57.694 "data_offset": 0, 00:12:57.694 "data_size": 65536 00:12:57.694 }, 00:12:57.694 { 00:12:57.694 "name": null, 00:12:57.694 "uuid": "d213cf4e-85c8-4c7a-aef9-e1342f98cbcd", 00:12:57.694 "is_configured": 
false, 00:12:57.694 "data_offset": 0, 00:12:57.694 "data_size": 65536 00:12:57.694 }, 00:12:57.694 { 00:12:57.694 "name": "BaseBdev3", 00:12:57.694 "uuid": "1e272a64-a78e-4d5d-bed0-eccbf7d59da4", 00:12:57.694 "is_configured": true, 00:12:57.694 "data_offset": 0, 00:12:57.694 "data_size": 65536 00:12:57.694 }, 00:12:57.694 { 00:12:57.694 "name": "BaseBdev4", 00:12:57.694 "uuid": "b85c6e92-09d9-41ae-a3a1-afa0c513a9fd", 00:12:57.694 "is_configured": true, 00:12:57.694 "data_offset": 0, 00:12:57.694 "data_size": 65536 00:12:57.694 } 00:12:57.694 ] 00:12:57.694 }' 00:12:57.694 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.694 14:37:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.955 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.955 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:57.955 14:37:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.955 14:37:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.955 14:37:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.955 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:57.955 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:57.955 14:37:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.955 14:37:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.955 [2024-10-01 14:37:49.493846] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:57.955 14:37:49 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.955 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:12:57.955 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:57.955 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:57.955 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:57.955 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:57.955 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:57.955 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.955 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.955 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.955 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.955 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:57.955 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.955 14:37:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.955 14:37:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.955 14:37:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.955 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.955 "name": "Existed_Raid", 00:12:57.955 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:57.955 "strip_size_kb": 64, 00:12:57.955 "state": "configuring", 00:12:57.955 "raid_level": "raid5f", 00:12:57.955 "superblock": false, 00:12:57.955 "num_base_bdevs": 4, 00:12:57.955 "num_base_bdevs_discovered": 2, 00:12:57.955 "num_base_bdevs_operational": 4, 00:12:57.955 "base_bdevs_list": [ 00:12:57.955 { 00:12:57.955 "name": null, 00:12:57.955 "uuid": "d977e033-a21a-4710-8cac-bbe93195337d", 00:12:57.955 "is_configured": false, 00:12:57.955 "data_offset": 0, 00:12:57.955 "data_size": 65536 00:12:57.955 }, 00:12:57.955 { 00:12:57.955 "name": null, 00:12:57.955 "uuid": "d213cf4e-85c8-4c7a-aef9-e1342f98cbcd", 00:12:57.955 "is_configured": false, 00:12:57.955 "data_offset": 0, 00:12:57.955 "data_size": 65536 00:12:57.955 }, 00:12:57.955 { 00:12:57.955 "name": "BaseBdev3", 00:12:57.955 "uuid": "1e272a64-a78e-4d5d-bed0-eccbf7d59da4", 00:12:57.955 "is_configured": true, 00:12:57.955 "data_offset": 0, 00:12:57.955 "data_size": 65536 00:12:57.955 }, 00:12:57.955 { 00:12:57.955 "name": "BaseBdev4", 00:12:57.955 "uuid": "b85c6e92-09d9-41ae-a3a1-afa0c513a9fd", 00:12:57.955 "is_configured": true, 00:12:57.955 "data_offset": 0, 00:12:57.955 "data_size": 65536 00:12:57.955 } 00:12:57.955 ] 00:12:57.955 }' 00:12:57.955 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.955 14:37:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.215 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:58.215 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.215 14:37:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.215 14:37:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.215 14:37:49 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.215 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:58.215 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:58.215 14:37:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.215 14:37:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.215 [2024-10-01 14:37:49.897855] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:58.475 14:37:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.475 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:12:58.475 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:58.475 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:58.475 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:58.475 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:58.475 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:58.475 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.475 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.475 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.475 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.475 14:37:49 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:58.475 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.475 14:37:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.475 14:37:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.475 14:37:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.475 14:37:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.475 "name": "Existed_Raid", 00:12:58.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.475 "strip_size_kb": 64, 00:12:58.475 "state": "configuring", 00:12:58.475 "raid_level": "raid5f", 00:12:58.475 "superblock": false, 00:12:58.475 "num_base_bdevs": 4, 00:12:58.475 "num_base_bdevs_discovered": 3, 00:12:58.475 "num_base_bdevs_operational": 4, 00:12:58.475 "base_bdevs_list": [ 00:12:58.475 { 00:12:58.475 "name": null, 00:12:58.475 "uuid": "d977e033-a21a-4710-8cac-bbe93195337d", 00:12:58.475 "is_configured": false, 00:12:58.475 "data_offset": 0, 00:12:58.475 "data_size": 65536 00:12:58.475 }, 00:12:58.475 { 00:12:58.475 "name": "BaseBdev2", 00:12:58.475 "uuid": "d213cf4e-85c8-4c7a-aef9-e1342f98cbcd", 00:12:58.475 "is_configured": true, 00:12:58.475 "data_offset": 0, 00:12:58.475 "data_size": 65536 00:12:58.475 }, 00:12:58.475 { 00:12:58.475 "name": "BaseBdev3", 00:12:58.475 "uuid": "1e272a64-a78e-4d5d-bed0-eccbf7d59da4", 00:12:58.475 "is_configured": true, 00:12:58.475 "data_offset": 0, 00:12:58.475 "data_size": 65536 00:12:58.475 }, 00:12:58.475 { 00:12:58.475 "name": "BaseBdev4", 00:12:58.475 "uuid": "b85c6e92-09d9-41ae-a3a1-afa0c513a9fd", 00:12:58.475 "is_configured": true, 00:12:58.475 "data_offset": 0, 00:12:58.475 "data_size": 65536 00:12:58.475 } 00:12:58.475 ] 00:12:58.475 }' 00:12:58.475 14:37:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.475 14:37:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.736 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:58.736 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.736 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.736 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.736 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.736 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:58.736 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.736 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:58.736 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.736 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.736 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.736 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d977e033-a21a-4710-8cac-bbe93195337d 00:12:58.736 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.736 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.736 [2024-10-01 14:37:50.288285] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:58.736 [2024-10-01 
14:37:50.288338] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:58.736 [2024-10-01 14:37:50.288345] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:58.736 [2024-10-01 14:37:50.288591] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:58.737 [2024-10-01 14:37:50.293301] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:58.737 [2024-10-01 14:37:50.293326] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:58.737 NewBaseBdev 00:12:58.737 [2024-10-01 14:37:50.293544] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:58.737 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.737 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:58.737 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:12:58.737 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:58.737 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:58.737 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:58.737 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:58.737 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:58.737 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.737 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.737 14:37:50 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.737 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:58.737 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.737 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.737 [ 00:12:58.737 { 00:12:58.737 "name": "NewBaseBdev", 00:12:58.737 "aliases": [ 00:12:58.737 "d977e033-a21a-4710-8cac-bbe93195337d" 00:12:58.737 ], 00:12:58.737 "product_name": "Malloc disk", 00:12:58.737 "block_size": 512, 00:12:58.737 "num_blocks": 65536, 00:12:58.737 "uuid": "d977e033-a21a-4710-8cac-bbe93195337d", 00:12:58.737 "assigned_rate_limits": { 00:12:58.737 "rw_ios_per_sec": 0, 00:12:58.737 "rw_mbytes_per_sec": 0, 00:12:58.737 "r_mbytes_per_sec": 0, 00:12:58.737 "w_mbytes_per_sec": 0 00:12:58.737 }, 00:12:58.737 "claimed": true, 00:12:58.737 "claim_type": "exclusive_write", 00:12:58.737 "zoned": false, 00:12:58.737 "supported_io_types": { 00:12:58.737 "read": true, 00:12:58.737 "write": true, 00:12:58.737 "unmap": true, 00:12:58.737 "flush": true, 00:12:58.737 "reset": true, 00:12:58.737 "nvme_admin": false, 00:12:58.737 "nvme_io": false, 00:12:58.737 "nvme_io_md": false, 00:12:58.737 "write_zeroes": true, 00:12:58.737 "zcopy": true, 00:12:58.737 "get_zone_info": false, 00:12:58.737 "zone_management": false, 00:12:58.737 "zone_append": false, 00:12:58.737 "compare": false, 00:12:58.737 "compare_and_write": false, 00:12:58.737 "abort": true, 00:12:58.737 "seek_hole": false, 00:12:58.737 "seek_data": false, 00:12:58.737 "copy": true, 00:12:58.737 "nvme_iov_md": false 00:12:58.737 }, 00:12:58.737 "memory_domains": [ 00:12:58.737 { 00:12:58.737 "dma_device_id": "system", 00:12:58.737 "dma_device_type": 1 00:12:58.737 }, 00:12:58.737 { 00:12:58.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.737 "dma_device_type": 2 00:12:58.737 } 
00:12:58.737 ], 00:12:58.737 "driver_specific": {} 00:12:58.737 } 00:12:58.737 ] 00:12:58.737 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.737 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:58.737 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:12:58.737 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:58.737 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:58.737 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:58.737 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:58.737 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:58.737 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.737 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.737 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.737 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.737 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.737 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:58.737 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.737 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.737 14:37:50 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.737 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.737 "name": "Existed_Raid", 00:12:58.737 "uuid": "4a3205a3-930d-48a6-8b81-f0813b124d14", 00:12:58.737 "strip_size_kb": 64, 00:12:58.737 "state": "online", 00:12:58.737 "raid_level": "raid5f", 00:12:58.737 "superblock": false, 00:12:58.737 "num_base_bdevs": 4, 00:12:58.737 "num_base_bdevs_discovered": 4, 00:12:58.737 "num_base_bdevs_operational": 4, 00:12:58.737 "base_bdevs_list": [ 00:12:58.737 { 00:12:58.737 "name": "NewBaseBdev", 00:12:58.737 "uuid": "d977e033-a21a-4710-8cac-bbe93195337d", 00:12:58.737 "is_configured": true, 00:12:58.737 "data_offset": 0, 00:12:58.737 "data_size": 65536 00:12:58.737 }, 00:12:58.737 { 00:12:58.737 "name": "BaseBdev2", 00:12:58.737 "uuid": "d213cf4e-85c8-4c7a-aef9-e1342f98cbcd", 00:12:58.737 "is_configured": true, 00:12:58.737 "data_offset": 0, 00:12:58.737 "data_size": 65536 00:12:58.737 }, 00:12:58.737 { 00:12:58.737 "name": "BaseBdev3", 00:12:58.737 "uuid": "1e272a64-a78e-4d5d-bed0-eccbf7d59da4", 00:12:58.737 "is_configured": true, 00:12:58.737 "data_offset": 0, 00:12:58.737 "data_size": 65536 00:12:58.737 }, 00:12:58.737 { 00:12:58.737 "name": "BaseBdev4", 00:12:58.737 "uuid": "b85c6e92-09d9-41ae-a3a1-afa0c513a9fd", 00:12:58.737 "is_configured": true, 00:12:58.737 "data_offset": 0, 00:12:58.737 "data_size": 65536 00:12:58.737 } 00:12:58.737 ] 00:12:58.737 }' 00:12:58.737 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.737 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.999 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:58.999 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:58.999 14:37:50 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:58.999 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:58.999 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:58.999 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:58.999 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:58.999 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:58.999 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.999 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.999 [2024-10-01 14:37:50.643081] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:58.999 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.999 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:58.999 "name": "Existed_Raid", 00:12:58.999 "aliases": [ 00:12:58.999 "4a3205a3-930d-48a6-8b81-f0813b124d14" 00:12:58.999 ], 00:12:58.999 "product_name": "Raid Volume", 00:12:58.999 "block_size": 512, 00:12:58.999 "num_blocks": 196608, 00:12:58.999 "uuid": "4a3205a3-930d-48a6-8b81-f0813b124d14", 00:12:58.999 "assigned_rate_limits": { 00:12:58.999 "rw_ios_per_sec": 0, 00:12:58.999 "rw_mbytes_per_sec": 0, 00:12:58.999 "r_mbytes_per_sec": 0, 00:12:58.999 "w_mbytes_per_sec": 0 00:12:58.999 }, 00:12:58.999 "claimed": false, 00:12:58.999 "zoned": false, 00:12:58.999 "supported_io_types": { 00:12:58.999 "read": true, 00:12:58.999 "write": true, 00:12:58.999 "unmap": false, 00:12:58.999 "flush": false, 00:12:58.999 "reset": true, 00:12:58.999 "nvme_admin": false, 00:12:58.999 "nvme_io": false, 00:12:58.999 "nvme_io_md": 
false, 00:12:58.999 "write_zeroes": true, 00:12:58.999 "zcopy": false, 00:12:58.999 "get_zone_info": false, 00:12:58.999 "zone_management": false, 00:12:58.999 "zone_append": false, 00:12:58.999 "compare": false, 00:12:58.999 "compare_and_write": false, 00:12:58.999 "abort": false, 00:12:58.999 "seek_hole": false, 00:12:58.999 "seek_data": false, 00:12:58.999 "copy": false, 00:12:58.999 "nvme_iov_md": false 00:12:58.999 }, 00:12:58.999 "driver_specific": { 00:12:58.999 "raid": { 00:12:58.999 "uuid": "4a3205a3-930d-48a6-8b81-f0813b124d14", 00:12:58.999 "strip_size_kb": 64, 00:12:58.999 "state": "online", 00:12:58.999 "raid_level": "raid5f", 00:12:58.999 "superblock": false, 00:12:58.999 "num_base_bdevs": 4, 00:12:58.999 "num_base_bdevs_discovered": 4, 00:12:58.999 "num_base_bdevs_operational": 4, 00:12:58.999 "base_bdevs_list": [ 00:12:58.999 { 00:12:58.999 "name": "NewBaseBdev", 00:12:58.999 "uuid": "d977e033-a21a-4710-8cac-bbe93195337d", 00:12:58.999 "is_configured": true, 00:12:58.999 "data_offset": 0, 00:12:58.999 "data_size": 65536 00:12:58.999 }, 00:12:58.999 { 00:12:58.999 "name": "BaseBdev2", 00:12:58.999 "uuid": "d213cf4e-85c8-4c7a-aef9-e1342f98cbcd", 00:12:58.999 "is_configured": true, 00:12:58.999 "data_offset": 0, 00:12:58.999 "data_size": 65536 00:12:58.999 }, 00:12:58.999 { 00:12:58.999 "name": "BaseBdev3", 00:12:58.999 "uuid": "1e272a64-a78e-4d5d-bed0-eccbf7d59da4", 00:12:58.999 "is_configured": true, 00:12:58.999 "data_offset": 0, 00:12:58.999 "data_size": 65536 00:12:58.999 }, 00:12:58.999 { 00:12:58.999 "name": "BaseBdev4", 00:12:58.999 "uuid": "b85c6e92-09d9-41ae-a3a1-afa0c513a9fd", 00:12:58.999 "is_configured": true, 00:12:58.999 "data_offset": 0, 00:12:58.999 "data_size": 65536 00:12:58.999 } 00:12:58.999 ] 00:12:58.999 } 00:12:58.999 } 00:12:58.999 }' 00:12:58.999 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:59.281 14:37:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:59.281 BaseBdev2 00:12:59.281 BaseBdev3 00:12:59.281 BaseBdev4' 00:12:59.281 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.281 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:59.281 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:59.281 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:59.281 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.281 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.281 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.281 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.281 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:59.281 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:59.281 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:59.281 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:59.281 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.281 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.281 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:12:59.281 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.281 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:59.281 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:59.281 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:59.281 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:59.281 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.281 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.282 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.282 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.282 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:59.282 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:59.282 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:59.282 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:59.282 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.282 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.282 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.282 14:37:50 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.282 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:59.282 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:59.282 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:59.282 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.282 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.282 [2024-10-01 14:37:50.870910] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:59.282 [2024-10-01 14:37:50.870947] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:59.282 [2024-10-01 14:37:50.871023] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:59.282 [2024-10-01 14:37:50.871320] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:59.282 [2024-10-01 14:37:50.871338] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:59.282 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.282 14:37:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80709 00:12:59.282 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 80709 ']' 00:12:59.282 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 80709 00:12:59.282 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:12:59.282 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:59.282 14:37:50 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80709 00:12:59.282 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:59.282 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:59.282 killing process with pid 80709 00:12:59.282 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80709' 00:12:59.282 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 80709 00:12:59.282 [2024-10-01 14:37:50.899657] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:59.282 14:37:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 80709 00:12:59.543 [2024-10-01 14:37:51.142465] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:00.484 14:37:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:00.484 00:13:00.484 real 0m8.487s 00:13:00.484 user 0m13.353s 00:13:00.484 sys 0m1.467s 00:13:00.484 14:37:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:00.484 14:37:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.484 ************************************ 00:13:00.484 END TEST raid5f_state_function_test 00:13:00.484 ************************************ 00:13:00.484 14:37:52 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:13:00.484 14:37:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:00.484 14:37:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:00.484 14:37:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:00.484 ************************************ 00:13:00.484 START TEST 
raid5f_state_function_test_sb 00:13:00.484 ************************************ 00:13:00.484 14:37:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 true 00:13:00.484 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:00.484 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:00.484 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:00.484 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:00.484 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:00.484 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:00.484 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:00.484 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:00.484 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:00.484 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:00.484 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:00.484 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:00.484 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:00.484 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:00.484 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:00.484 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:00.484 
14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:00.484 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:00.484 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:00.484 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:00.484 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:00.484 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:00.484 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:00.484 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:00.484 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:00.484 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:00.484 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:00.484 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:00.484 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:00.484 Process raid pid: 81349 00:13:00.484 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=81349 00:13:00.484 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81349' 00:13:00.484 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 81349 00:13:00.484 14:37:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # 
'[' -z 81349 ']' 00:13:00.484 14:37:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.484 14:37:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:00.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.484 14:37:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.484 14:37:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:00.484 14:37:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.484 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:00.484 [2024-10-01 14:37:52.086960] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:13:00.484 [2024-10-01 14:37:52.087085] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:00.746 [2024-10-01 14:37:52.238873] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.007 [2024-10-01 14:37:52.433039] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.007 [2024-10-01 14:37:52.572994] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:01.007 [2024-10-01 14:37:52.573039] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:01.270 14:37:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:01.270 14:37:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:01.270 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:01.270 14:37:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.270 14:37:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.270 [2024-10-01 14:37:52.937481] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:01.270 [2024-10-01 14:37:52.937542] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:01.270 [2024-10-01 14:37:52.937552] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:01.270 [2024-10-01 14:37:52.937562] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:01.270 [2024-10-01 14:37:52.937569] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:13:01.270 [2024-10-01 14:37:52.937579] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:01.270 [2024-10-01 14:37:52.937586] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:01.270 [2024-10-01 14:37:52.937594] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:01.270 14:37:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.270 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:01.270 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:01.270 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:01.270 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:01.270 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:01.270 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:01.270 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.270 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.270 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.270 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.270 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:01.270 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:01.270 14:37:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.270 14:37:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.531 14:37:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.531 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.531 "name": "Existed_Raid", 00:13:01.531 "uuid": "d00f6338-a5a3-4167-bff1-4d0d95b2f7a5", 00:13:01.531 "strip_size_kb": 64, 00:13:01.531 "state": "configuring", 00:13:01.531 "raid_level": "raid5f", 00:13:01.531 "superblock": true, 00:13:01.531 "num_base_bdevs": 4, 00:13:01.531 "num_base_bdevs_discovered": 0, 00:13:01.531 "num_base_bdevs_operational": 4, 00:13:01.531 "base_bdevs_list": [ 00:13:01.531 { 00:13:01.531 "name": "BaseBdev1", 00:13:01.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.531 "is_configured": false, 00:13:01.531 "data_offset": 0, 00:13:01.531 "data_size": 0 00:13:01.531 }, 00:13:01.531 { 00:13:01.531 "name": "BaseBdev2", 00:13:01.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.531 "is_configured": false, 00:13:01.531 "data_offset": 0, 00:13:01.531 "data_size": 0 00:13:01.531 }, 00:13:01.531 { 00:13:01.531 "name": "BaseBdev3", 00:13:01.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.531 "is_configured": false, 00:13:01.531 "data_offset": 0, 00:13:01.531 "data_size": 0 00:13:01.531 }, 00:13:01.531 { 00:13:01.531 "name": "BaseBdev4", 00:13:01.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.531 "is_configured": false, 00:13:01.531 "data_offset": 0, 00:13:01.531 "data_size": 0 00:13:01.531 } 00:13:01.531 ] 00:13:01.531 }' 00:13:01.531 14:37:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.531 14:37:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:01.792 14:37:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:01.792 14:37:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.792 14:37:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.792 [2024-10-01 14:37:53.253463] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:01.792 [2024-10-01 14:37:53.253513] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:01.792 14:37:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.792 14:37:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:01.792 14:37:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.792 14:37:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.792 [2024-10-01 14:37:53.261479] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:01.792 [2024-10-01 14:37:53.261522] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:01.792 [2024-10-01 14:37:53.261531] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:01.792 [2024-10-01 14:37:53.261540] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:01.792 [2024-10-01 14:37:53.261546] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:01.792 [2024-10-01 14:37:53.261555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:01.792 [2024-10-01 14:37:53.261561] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:01.792 [2024-10-01 14:37:53.261570] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:01.792 14:37:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.792 14:37:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:01.792 14:37:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.792 14:37:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.792 [2024-10-01 14:37:53.306942] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:01.792 BaseBdev1 00:13:01.792 14:37:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.792 14:37:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:01.793 14:37:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:01.793 14:37:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:01.793 14:37:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:01.793 14:37:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:01.793 14:37:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:01.793 14:37:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:01.793 14:37:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.793 14:37:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:13:01.793 14:37:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.793 14:37:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:01.793 14:37:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.793 14:37:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.793 [ 00:13:01.793 { 00:13:01.793 "name": "BaseBdev1", 00:13:01.793 "aliases": [ 00:13:01.793 "73a9bb82-6d41-4504-bdf8-b474ca28c706" 00:13:01.793 ], 00:13:01.793 "product_name": "Malloc disk", 00:13:01.793 "block_size": 512, 00:13:01.793 "num_blocks": 65536, 00:13:01.793 "uuid": "73a9bb82-6d41-4504-bdf8-b474ca28c706", 00:13:01.793 "assigned_rate_limits": { 00:13:01.793 "rw_ios_per_sec": 0, 00:13:01.793 "rw_mbytes_per_sec": 0, 00:13:01.793 "r_mbytes_per_sec": 0, 00:13:01.793 "w_mbytes_per_sec": 0 00:13:01.793 }, 00:13:01.793 "claimed": true, 00:13:01.793 "claim_type": "exclusive_write", 00:13:01.793 "zoned": false, 00:13:01.793 "supported_io_types": { 00:13:01.793 "read": true, 00:13:01.793 "write": true, 00:13:01.793 "unmap": true, 00:13:01.793 "flush": true, 00:13:01.793 "reset": true, 00:13:01.793 "nvme_admin": false, 00:13:01.793 "nvme_io": false, 00:13:01.793 "nvme_io_md": false, 00:13:01.793 "write_zeroes": true, 00:13:01.793 "zcopy": true, 00:13:01.793 "get_zone_info": false, 00:13:01.793 "zone_management": false, 00:13:01.793 "zone_append": false, 00:13:01.793 "compare": false, 00:13:01.793 "compare_and_write": false, 00:13:01.793 "abort": true, 00:13:01.793 "seek_hole": false, 00:13:01.793 "seek_data": false, 00:13:01.793 "copy": true, 00:13:01.793 "nvme_iov_md": false 00:13:01.793 }, 00:13:01.793 "memory_domains": [ 00:13:01.793 { 00:13:01.793 "dma_device_id": "system", 00:13:01.793 "dma_device_type": 1 00:13:01.793 }, 00:13:01.793 { 00:13:01.793 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:01.793 "dma_device_type": 2 00:13:01.793 } 00:13:01.793 ], 00:13:01.793 "driver_specific": {} 00:13:01.793 } 00:13:01.793 ] 00:13:01.793 14:37:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.793 14:37:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:01.793 14:37:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:01.793 14:37:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:01.793 14:37:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:01.793 14:37:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:01.793 14:37:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:01.793 14:37:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:01.793 14:37:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.793 14:37:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.793 14:37:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.793 14:37:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.793 14:37:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:01.793 14:37:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.793 14:37:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.793 14:37:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.793 14:37:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.793 14:37:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.793 "name": "Existed_Raid", 00:13:01.793 "uuid": "1b60d93f-1749-4968-9da7-0f864ed4bb2f", 00:13:01.793 "strip_size_kb": 64, 00:13:01.793 "state": "configuring", 00:13:01.793 "raid_level": "raid5f", 00:13:01.793 "superblock": true, 00:13:01.793 "num_base_bdevs": 4, 00:13:01.793 "num_base_bdevs_discovered": 1, 00:13:01.793 "num_base_bdevs_operational": 4, 00:13:01.793 "base_bdevs_list": [ 00:13:01.793 { 00:13:01.793 "name": "BaseBdev1", 00:13:01.793 "uuid": "73a9bb82-6d41-4504-bdf8-b474ca28c706", 00:13:01.793 "is_configured": true, 00:13:01.793 "data_offset": 2048, 00:13:01.793 "data_size": 63488 00:13:01.793 }, 00:13:01.793 { 00:13:01.793 "name": "BaseBdev2", 00:13:01.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.793 "is_configured": false, 00:13:01.793 "data_offset": 0, 00:13:01.793 "data_size": 0 00:13:01.793 }, 00:13:01.793 { 00:13:01.793 "name": "BaseBdev3", 00:13:01.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.793 "is_configured": false, 00:13:01.793 "data_offset": 0, 00:13:01.793 "data_size": 0 00:13:01.793 }, 00:13:01.793 { 00:13:01.793 "name": "BaseBdev4", 00:13:01.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.793 "is_configured": false, 00:13:01.793 "data_offset": 0, 00:13:01.793 "data_size": 0 00:13:01.793 } 00:13:01.793 ] 00:13:01.793 }' 00:13:01.793 14:37:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.793 14:37:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.054 14:37:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:02.054 14:37:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.054 14:37:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.054 [2024-10-01 14:37:53.647076] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:02.054 [2024-10-01 14:37:53.647142] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:02.054 14:37:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.054 14:37:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:02.054 14:37:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.054 14:37:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.054 [2024-10-01 14:37:53.655122] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:02.054 [2024-10-01 14:37:53.656999] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:02.054 [2024-10-01 14:37:53.657046] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:02.054 [2024-10-01 14:37:53.657055] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:02.054 [2024-10-01 14:37:53.657067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:02.054 [2024-10-01 14:37:53.657074] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:02.054 [2024-10-01 14:37:53.657082] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:02.054 14:37:53 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.055 14:37:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:02.055 14:37:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:02.055 14:37:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:02.055 14:37:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:02.055 14:37:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:02.055 14:37:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:02.055 14:37:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:02.055 14:37:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:02.055 14:37:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.055 14:37:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.055 14:37:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.055 14:37:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.055 14:37:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:02.055 14:37:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.055 14:37:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.055 14:37:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.055 14:37:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.055 14:37:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.055 "name": "Existed_Raid", 00:13:02.055 "uuid": "2ddf8e2c-d8ad-4105-9c11-830890bf5847", 00:13:02.055 "strip_size_kb": 64, 00:13:02.055 "state": "configuring", 00:13:02.055 "raid_level": "raid5f", 00:13:02.055 "superblock": true, 00:13:02.055 "num_base_bdevs": 4, 00:13:02.055 "num_base_bdevs_discovered": 1, 00:13:02.055 "num_base_bdevs_operational": 4, 00:13:02.055 "base_bdevs_list": [ 00:13:02.055 { 00:13:02.055 "name": "BaseBdev1", 00:13:02.055 "uuid": "73a9bb82-6d41-4504-bdf8-b474ca28c706", 00:13:02.055 "is_configured": true, 00:13:02.055 "data_offset": 2048, 00:13:02.055 "data_size": 63488 00:13:02.055 }, 00:13:02.055 { 00:13:02.055 "name": "BaseBdev2", 00:13:02.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.055 "is_configured": false, 00:13:02.055 "data_offset": 0, 00:13:02.055 "data_size": 0 00:13:02.055 }, 00:13:02.055 { 00:13:02.055 "name": "BaseBdev3", 00:13:02.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.055 "is_configured": false, 00:13:02.055 "data_offset": 0, 00:13:02.055 "data_size": 0 00:13:02.055 }, 00:13:02.055 { 00:13:02.055 "name": "BaseBdev4", 00:13:02.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.055 "is_configured": false, 00:13:02.055 "data_offset": 0, 00:13:02.055 "data_size": 0 00:13:02.055 } 00:13:02.055 ] 00:13:02.055 }' 00:13:02.055 14:37:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.055 14:37:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.315 14:37:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:02.315 14:37:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:02.315 14:37:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.577 [2024-10-01 14:37:54.006136] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:02.577 BaseBdev2 00:13:02.577 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.577 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:02.577 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:02.577 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:02.577 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:02.577 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:02.577 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:02.577 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:02.577 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.577 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.577 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.577 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:02.577 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.577 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.577 [ 00:13:02.577 { 00:13:02.577 "name": "BaseBdev2", 00:13:02.577 "aliases": [ 00:13:02.577 
"142d656d-b7c4-4a0e-8ad4-c39fb348f650" 00:13:02.577 ], 00:13:02.577 "product_name": "Malloc disk", 00:13:02.577 "block_size": 512, 00:13:02.577 "num_blocks": 65536, 00:13:02.577 "uuid": "142d656d-b7c4-4a0e-8ad4-c39fb348f650", 00:13:02.577 "assigned_rate_limits": { 00:13:02.577 "rw_ios_per_sec": 0, 00:13:02.577 "rw_mbytes_per_sec": 0, 00:13:02.577 "r_mbytes_per_sec": 0, 00:13:02.577 "w_mbytes_per_sec": 0 00:13:02.577 }, 00:13:02.577 "claimed": true, 00:13:02.577 "claim_type": "exclusive_write", 00:13:02.577 "zoned": false, 00:13:02.577 "supported_io_types": { 00:13:02.577 "read": true, 00:13:02.577 "write": true, 00:13:02.577 "unmap": true, 00:13:02.577 "flush": true, 00:13:02.577 "reset": true, 00:13:02.577 "nvme_admin": false, 00:13:02.577 "nvme_io": false, 00:13:02.577 "nvme_io_md": false, 00:13:02.577 "write_zeroes": true, 00:13:02.577 "zcopy": true, 00:13:02.577 "get_zone_info": false, 00:13:02.577 "zone_management": false, 00:13:02.577 "zone_append": false, 00:13:02.577 "compare": false, 00:13:02.577 "compare_and_write": false, 00:13:02.577 "abort": true, 00:13:02.577 "seek_hole": false, 00:13:02.577 "seek_data": false, 00:13:02.577 "copy": true, 00:13:02.577 "nvme_iov_md": false 00:13:02.577 }, 00:13:02.577 "memory_domains": [ 00:13:02.577 { 00:13:02.577 "dma_device_id": "system", 00:13:02.577 "dma_device_type": 1 00:13:02.577 }, 00:13:02.577 { 00:13:02.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.577 "dma_device_type": 2 00:13:02.577 } 00:13:02.577 ], 00:13:02.577 "driver_specific": {} 00:13:02.577 } 00:13:02.577 ] 00:13:02.577 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.577 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:02.577 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:02.577 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:13:02.577 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:02.577 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:02.577 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:02.577 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:02.577 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:02.577 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:02.577 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.577 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.577 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.577 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.577 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.577 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.577 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.577 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:02.577 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.577 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.577 "name": "Existed_Raid", 00:13:02.577 "uuid": 
"2ddf8e2c-d8ad-4105-9c11-830890bf5847", 00:13:02.577 "strip_size_kb": 64, 00:13:02.577 "state": "configuring", 00:13:02.577 "raid_level": "raid5f", 00:13:02.577 "superblock": true, 00:13:02.577 "num_base_bdevs": 4, 00:13:02.577 "num_base_bdevs_discovered": 2, 00:13:02.577 "num_base_bdevs_operational": 4, 00:13:02.577 "base_bdevs_list": [ 00:13:02.577 { 00:13:02.577 "name": "BaseBdev1", 00:13:02.577 "uuid": "73a9bb82-6d41-4504-bdf8-b474ca28c706", 00:13:02.577 "is_configured": true, 00:13:02.577 "data_offset": 2048, 00:13:02.577 "data_size": 63488 00:13:02.577 }, 00:13:02.577 { 00:13:02.577 "name": "BaseBdev2", 00:13:02.577 "uuid": "142d656d-b7c4-4a0e-8ad4-c39fb348f650", 00:13:02.577 "is_configured": true, 00:13:02.577 "data_offset": 2048, 00:13:02.577 "data_size": 63488 00:13:02.577 }, 00:13:02.577 { 00:13:02.577 "name": "BaseBdev3", 00:13:02.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.577 "is_configured": false, 00:13:02.577 "data_offset": 0, 00:13:02.577 "data_size": 0 00:13:02.577 }, 00:13:02.577 { 00:13:02.577 "name": "BaseBdev4", 00:13:02.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.577 "is_configured": false, 00:13:02.577 "data_offset": 0, 00:13:02.577 "data_size": 0 00:13:02.577 } 00:13:02.578 ] 00:13:02.578 }' 00:13:02.578 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.578 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.839 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:02.839 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.839 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.839 [2024-10-01 14:37:54.389462] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:02.839 BaseBdev3 
00:13:02.839 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.839 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:02.839 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:02.839 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:02.839 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:02.839 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:02.839 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:02.839 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:02.839 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.839 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.839 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.839 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:02.839 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.839 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.839 [ 00:13:02.839 { 00:13:02.839 "name": "BaseBdev3", 00:13:02.839 "aliases": [ 00:13:02.839 "edd29106-1cc2-4e23-9071-fcf0d9c5e1a5" 00:13:02.839 ], 00:13:02.839 "product_name": "Malloc disk", 00:13:02.839 "block_size": 512, 00:13:02.839 "num_blocks": 65536, 00:13:02.839 "uuid": "edd29106-1cc2-4e23-9071-fcf0d9c5e1a5", 00:13:02.839 
"assigned_rate_limits": { 00:13:02.839 "rw_ios_per_sec": 0, 00:13:02.839 "rw_mbytes_per_sec": 0, 00:13:02.839 "r_mbytes_per_sec": 0, 00:13:02.839 "w_mbytes_per_sec": 0 00:13:02.839 }, 00:13:02.839 "claimed": true, 00:13:02.839 "claim_type": "exclusive_write", 00:13:02.839 "zoned": false, 00:13:02.839 "supported_io_types": { 00:13:02.839 "read": true, 00:13:02.839 "write": true, 00:13:02.839 "unmap": true, 00:13:02.839 "flush": true, 00:13:02.839 "reset": true, 00:13:02.839 "nvme_admin": false, 00:13:02.839 "nvme_io": false, 00:13:02.839 "nvme_io_md": false, 00:13:02.839 "write_zeroes": true, 00:13:02.839 "zcopy": true, 00:13:02.839 "get_zone_info": false, 00:13:02.839 "zone_management": false, 00:13:02.839 "zone_append": false, 00:13:02.839 "compare": false, 00:13:02.839 "compare_and_write": false, 00:13:02.839 "abort": true, 00:13:02.839 "seek_hole": false, 00:13:02.839 "seek_data": false, 00:13:02.839 "copy": true, 00:13:02.839 "nvme_iov_md": false 00:13:02.839 }, 00:13:02.839 "memory_domains": [ 00:13:02.839 { 00:13:02.839 "dma_device_id": "system", 00:13:02.839 "dma_device_type": 1 00:13:02.839 }, 00:13:02.839 { 00:13:02.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.839 "dma_device_type": 2 00:13:02.839 } 00:13:02.839 ], 00:13:02.839 "driver_specific": {} 00:13:02.839 } 00:13:02.839 ] 00:13:02.839 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.839 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:02.839 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:02.839 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:02.839 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:02.839 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:13:02.839 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:02.839 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:02.839 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:02.839 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:02.839 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.839 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.839 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.839 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.839 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.839 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:02.839 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.839 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.839 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.839 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.839 "name": "Existed_Raid", 00:13:02.839 "uuid": "2ddf8e2c-d8ad-4105-9c11-830890bf5847", 00:13:02.839 "strip_size_kb": 64, 00:13:02.839 "state": "configuring", 00:13:02.839 "raid_level": "raid5f", 00:13:02.839 "superblock": true, 00:13:02.839 "num_base_bdevs": 4, 00:13:02.839 "num_base_bdevs_discovered": 3, 
00:13:02.839 "num_base_bdevs_operational": 4, 00:13:02.839 "base_bdevs_list": [ 00:13:02.839 { 00:13:02.839 "name": "BaseBdev1", 00:13:02.839 "uuid": "73a9bb82-6d41-4504-bdf8-b474ca28c706", 00:13:02.839 "is_configured": true, 00:13:02.839 "data_offset": 2048, 00:13:02.839 "data_size": 63488 00:13:02.839 }, 00:13:02.839 { 00:13:02.839 "name": "BaseBdev2", 00:13:02.839 "uuid": "142d656d-b7c4-4a0e-8ad4-c39fb348f650", 00:13:02.839 "is_configured": true, 00:13:02.839 "data_offset": 2048, 00:13:02.839 "data_size": 63488 00:13:02.839 }, 00:13:02.839 { 00:13:02.839 "name": "BaseBdev3", 00:13:02.839 "uuid": "edd29106-1cc2-4e23-9071-fcf0d9c5e1a5", 00:13:02.839 "is_configured": true, 00:13:02.839 "data_offset": 2048, 00:13:02.839 "data_size": 63488 00:13:02.839 }, 00:13:02.839 { 00:13:02.839 "name": "BaseBdev4", 00:13:02.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.839 "is_configured": false, 00:13:02.839 "data_offset": 0, 00:13:02.839 "data_size": 0 00:13:02.839 } 00:13:02.839 ] 00:13:02.839 }' 00:13:02.839 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.839 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.099 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:03.099 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.099 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.099 [2024-10-01 14:37:54.760551] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:03.099 [2024-10-01 14:37:54.760806] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:03.099 [2024-10-01 14:37:54.760829] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:03.099 BaseBdev4 
00:13:03.099 [2024-10-01 14:37:54.761081] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:03.099 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.099 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:03.099 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:13:03.099 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:03.099 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:03.099 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:03.099 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:03.099 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:03.099 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.099 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.099 [2024-10-01 14:37:54.766046] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:03.100 [2024-10-01 14:37:54.766073] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:03.100 [2024-10-01 14:37:54.766319] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:03.100 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.100 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:03.100 14:37:54 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.100 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.100 [ 00:13:03.100 { 00:13:03.100 "name": "BaseBdev4", 00:13:03.100 "aliases": [ 00:13:03.100 "8ac5d508-e799-45ae-a21c-1e405ae34ac2" 00:13:03.100 ], 00:13:03.100 "product_name": "Malloc disk", 00:13:03.100 "block_size": 512, 00:13:03.100 "num_blocks": 65536, 00:13:03.100 "uuid": "8ac5d508-e799-45ae-a21c-1e405ae34ac2", 00:13:03.100 "assigned_rate_limits": { 00:13:03.100 "rw_ios_per_sec": 0, 00:13:03.100 "rw_mbytes_per_sec": 0, 00:13:03.100 "r_mbytes_per_sec": 0, 00:13:03.100 "w_mbytes_per_sec": 0 00:13:03.100 }, 00:13:03.100 "claimed": true, 00:13:03.100 "claim_type": "exclusive_write", 00:13:03.100 "zoned": false, 00:13:03.100 "supported_io_types": { 00:13:03.100 "read": true, 00:13:03.100 "write": true, 00:13:03.100 "unmap": true, 00:13:03.100 "flush": true, 00:13:03.100 "reset": true, 00:13:03.100 "nvme_admin": false, 00:13:03.361 "nvme_io": false, 00:13:03.361 "nvme_io_md": false, 00:13:03.361 "write_zeroes": true, 00:13:03.361 "zcopy": true, 00:13:03.361 "get_zone_info": false, 00:13:03.361 "zone_management": false, 00:13:03.361 "zone_append": false, 00:13:03.361 "compare": false, 00:13:03.361 "compare_and_write": false, 00:13:03.361 "abort": true, 00:13:03.361 "seek_hole": false, 00:13:03.361 "seek_data": false, 00:13:03.361 "copy": true, 00:13:03.361 "nvme_iov_md": false 00:13:03.361 }, 00:13:03.361 "memory_domains": [ 00:13:03.361 { 00:13:03.361 "dma_device_id": "system", 00:13:03.361 "dma_device_type": 1 00:13:03.361 }, 00:13:03.361 { 00:13:03.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.361 "dma_device_type": 2 00:13:03.361 } 00:13:03.361 ], 00:13:03.361 "driver_specific": {} 00:13:03.361 } 00:13:03.361 ] 00:13:03.361 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.361 14:37:54 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:03.361 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:03.361 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:03.361 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:13:03.361 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:03.361 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:03.361 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:03.361 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:03.361 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:03.361 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.361 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.361 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.361 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.361 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:03.361 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.361 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.361 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:03.361 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.361 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.361 "name": "Existed_Raid", 00:13:03.361 "uuid": "2ddf8e2c-d8ad-4105-9c11-830890bf5847", 00:13:03.361 "strip_size_kb": 64, 00:13:03.361 "state": "online", 00:13:03.361 "raid_level": "raid5f", 00:13:03.361 "superblock": true, 00:13:03.361 "num_base_bdevs": 4, 00:13:03.361 "num_base_bdevs_discovered": 4, 00:13:03.361 "num_base_bdevs_operational": 4, 00:13:03.361 "base_bdevs_list": [ 00:13:03.361 { 00:13:03.361 "name": "BaseBdev1", 00:13:03.361 "uuid": "73a9bb82-6d41-4504-bdf8-b474ca28c706", 00:13:03.361 "is_configured": true, 00:13:03.361 "data_offset": 2048, 00:13:03.361 "data_size": 63488 00:13:03.361 }, 00:13:03.361 { 00:13:03.361 "name": "BaseBdev2", 00:13:03.361 "uuid": "142d656d-b7c4-4a0e-8ad4-c39fb348f650", 00:13:03.361 "is_configured": true, 00:13:03.361 "data_offset": 2048, 00:13:03.361 "data_size": 63488 00:13:03.361 }, 00:13:03.361 { 00:13:03.361 "name": "BaseBdev3", 00:13:03.361 "uuid": "edd29106-1cc2-4e23-9071-fcf0d9c5e1a5", 00:13:03.361 "is_configured": true, 00:13:03.361 "data_offset": 2048, 00:13:03.361 "data_size": 63488 00:13:03.361 }, 00:13:03.361 { 00:13:03.361 "name": "BaseBdev4", 00:13:03.361 "uuid": "8ac5d508-e799-45ae-a21c-1e405ae34ac2", 00:13:03.361 "is_configured": true, 00:13:03.361 "data_offset": 2048, 00:13:03.361 "data_size": 63488 00:13:03.361 } 00:13:03.361 ] 00:13:03.361 }' 00:13:03.361 14:37:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.361 14:37:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.622 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:03.622 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:13:03.622 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:03.622 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:03.622 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:03.622 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:03.622 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:03.622 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:03.622 14:37:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.622 14:37:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.622 [2024-10-01 14:37:55.092078] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:03.622 14:37:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.622 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:03.622 "name": "Existed_Raid", 00:13:03.622 "aliases": [ 00:13:03.622 "2ddf8e2c-d8ad-4105-9c11-830890bf5847" 00:13:03.622 ], 00:13:03.622 "product_name": "Raid Volume", 00:13:03.622 "block_size": 512, 00:13:03.622 "num_blocks": 190464, 00:13:03.622 "uuid": "2ddf8e2c-d8ad-4105-9c11-830890bf5847", 00:13:03.622 "assigned_rate_limits": { 00:13:03.622 "rw_ios_per_sec": 0, 00:13:03.622 "rw_mbytes_per_sec": 0, 00:13:03.622 "r_mbytes_per_sec": 0, 00:13:03.622 "w_mbytes_per_sec": 0 00:13:03.622 }, 00:13:03.622 "claimed": false, 00:13:03.622 "zoned": false, 00:13:03.622 "supported_io_types": { 00:13:03.622 "read": true, 00:13:03.622 "write": true, 00:13:03.622 "unmap": false, 00:13:03.622 "flush": false, 
00:13:03.622 "reset": true, 00:13:03.622 "nvme_admin": false, 00:13:03.622 "nvme_io": false, 00:13:03.622 "nvme_io_md": false, 00:13:03.622 "write_zeroes": true, 00:13:03.622 "zcopy": false, 00:13:03.622 "get_zone_info": false, 00:13:03.622 "zone_management": false, 00:13:03.622 "zone_append": false, 00:13:03.622 "compare": false, 00:13:03.622 "compare_and_write": false, 00:13:03.622 "abort": false, 00:13:03.622 "seek_hole": false, 00:13:03.622 "seek_data": false, 00:13:03.622 "copy": false, 00:13:03.622 "nvme_iov_md": false 00:13:03.622 }, 00:13:03.622 "driver_specific": { 00:13:03.622 "raid": { 00:13:03.622 "uuid": "2ddf8e2c-d8ad-4105-9c11-830890bf5847", 00:13:03.622 "strip_size_kb": 64, 00:13:03.622 "state": "online", 00:13:03.622 "raid_level": "raid5f", 00:13:03.622 "superblock": true, 00:13:03.622 "num_base_bdevs": 4, 00:13:03.622 "num_base_bdevs_discovered": 4, 00:13:03.622 "num_base_bdevs_operational": 4, 00:13:03.622 "base_bdevs_list": [ 00:13:03.622 { 00:13:03.622 "name": "BaseBdev1", 00:13:03.622 "uuid": "73a9bb82-6d41-4504-bdf8-b474ca28c706", 00:13:03.622 "is_configured": true, 00:13:03.623 "data_offset": 2048, 00:13:03.623 "data_size": 63488 00:13:03.623 }, 00:13:03.623 { 00:13:03.623 "name": "BaseBdev2", 00:13:03.623 "uuid": "142d656d-b7c4-4a0e-8ad4-c39fb348f650", 00:13:03.623 "is_configured": true, 00:13:03.623 "data_offset": 2048, 00:13:03.623 "data_size": 63488 00:13:03.623 }, 00:13:03.623 { 00:13:03.623 "name": "BaseBdev3", 00:13:03.623 "uuid": "edd29106-1cc2-4e23-9071-fcf0d9c5e1a5", 00:13:03.623 "is_configured": true, 00:13:03.623 "data_offset": 2048, 00:13:03.623 "data_size": 63488 00:13:03.623 }, 00:13:03.623 { 00:13:03.623 "name": "BaseBdev4", 00:13:03.623 "uuid": "8ac5d508-e799-45ae-a21c-1e405ae34ac2", 00:13:03.623 "is_configured": true, 00:13:03.623 "data_offset": 2048, 00:13:03.623 "data_size": 63488 00:13:03.623 } 00:13:03.623 ] 00:13:03.623 } 00:13:03.623 } 00:13:03.623 }' 00:13:03.623 14:37:55 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:03.623 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:03.623 BaseBdev2 00:13:03.623 BaseBdev3 00:13:03.623 BaseBdev4' 00:13:03.623 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:03.623 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:03.623 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:03.623 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:03.623 14:37:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.623 14:37:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.623 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:03.623 14:37:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.623 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:03.623 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:03.623 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:03.623 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:03.623 14:37:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.623 14:37:55 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:03.623 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:03.623 14:37:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.623 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:03.623 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:03.623 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:03.623 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:03.623 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:03.623 14:37:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.623 14:37:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.623 14:37:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.623 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:03.623 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:03.623 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:03.623 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:03.623 14:37:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.623 14:37:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:13:03.623 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:03.623 14:37:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.884 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:03.884 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:03.884 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:03.884 14:37:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.884 14:37:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.884 [2024-10-01 14:37:55.316003] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:03.884 14:37:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.884 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:03.884 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:03.884 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:03.884 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:03.884 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:03.884 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:03.884 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:03.884 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:13:03.884 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:03.884 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:03.884 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:03.884 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.884 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.884 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.884 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.884 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:03.884 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.884 14:37:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.884 14:37:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.884 14:37:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.884 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.884 "name": "Existed_Raid", 00:13:03.884 "uuid": "2ddf8e2c-d8ad-4105-9c11-830890bf5847", 00:13:03.884 "strip_size_kb": 64, 00:13:03.884 "state": "online", 00:13:03.884 "raid_level": "raid5f", 00:13:03.884 "superblock": true, 00:13:03.884 "num_base_bdevs": 4, 00:13:03.884 "num_base_bdevs_discovered": 3, 00:13:03.884 "num_base_bdevs_operational": 3, 00:13:03.884 "base_bdevs_list": [ 00:13:03.884 { 00:13:03.884 "name": null, 00:13:03.884 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:03.884 "is_configured": false, 00:13:03.884 "data_offset": 0, 00:13:03.884 "data_size": 63488 00:13:03.884 }, 00:13:03.884 { 00:13:03.884 "name": "BaseBdev2", 00:13:03.884 "uuid": "142d656d-b7c4-4a0e-8ad4-c39fb348f650", 00:13:03.884 "is_configured": true, 00:13:03.884 "data_offset": 2048, 00:13:03.884 "data_size": 63488 00:13:03.884 }, 00:13:03.884 { 00:13:03.884 "name": "BaseBdev3", 00:13:03.884 "uuid": "edd29106-1cc2-4e23-9071-fcf0d9c5e1a5", 00:13:03.884 "is_configured": true, 00:13:03.884 "data_offset": 2048, 00:13:03.884 "data_size": 63488 00:13:03.884 }, 00:13:03.884 { 00:13:03.884 "name": "BaseBdev4", 00:13:03.884 "uuid": "8ac5d508-e799-45ae-a21c-1e405ae34ac2", 00:13:03.884 "is_configured": true, 00:13:03.884 "data_offset": 2048, 00:13:03.884 "data_size": 63488 00:13:03.884 } 00:13:03.884 ] 00:13:03.884 }' 00:13:03.884 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.884 14:37:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.144 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:04.144 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:04.144 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:04.144 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.144 14:37:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.144 14:37:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.144 14:37:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.144 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:13:04.144 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:04.144 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:04.144 14:37:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.144 14:37:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.144 [2024-10-01 14:37:55.739660] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:04.144 [2024-10-01 14:37:55.739848] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:04.144 [2024-10-01 14:37:55.799824] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:04.144 14:37:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.144 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:04.144 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:04.145 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.145 14:37:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.145 14:37:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.145 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:04.145 14:37:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.406 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:04.407 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:04.407 
14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:04.407 14:37:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.407 14:37:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.407 [2024-10-01 14:37:55.839889] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:04.407 14:37:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.407 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:04.407 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:04.407 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:04.407 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.407 14:37:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.407 14:37:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.407 14:37:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.407 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:04.407 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:04.407 14:37:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:04.407 14:37:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.407 14:37:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.407 [2024-10-01 14:37:55.951297] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:04.407 [2024-10-01 14:37:55.951358] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:04.407 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.407 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:04.407 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:04.407 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.407 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.407 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.407 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:04.407 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.407 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:04.407 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:04.407 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:04.407 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:04.407 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:04.407 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:04.407 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.407 14:37:56 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:04.407 BaseBdev2 00:13:04.407 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.407 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:04.407 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:04.407 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:04.407 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:04.407 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:04.407 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:04.407 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:04.407 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.407 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.407 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.407 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:04.407 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.407 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.668 [ 00:13:04.668 { 00:13:04.668 "name": "BaseBdev2", 00:13:04.668 "aliases": [ 00:13:04.668 "90732e38-2d9a-43b0-8b83-78f69b636edf" 00:13:04.668 ], 00:13:04.668 "product_name": "Malloc disk", 00:13:04.668 "block_size": 512, 00:13:04.668 "num_blocks": 65536, 00:13:04.668 "uuid": 
"90732e38-2d9a-43b0-8b83-78f69b636edf", 00:13:04.668 "assigned_rate_limits": { 00:13:04.668 "rw_ios_per_sec": 0, 00:13:04.668 "rw_mbytes_per_sec": 0, 00:13:04.668 "r_mbytes_per_sec": 0, 00:13:04.668 "w_mbytes_per_sec": 0 00:13:04.668 }, 00:13:04.668 "claimed": false, 00:13:04.668 "zoned": false, 00:13:04.668 "supported_io_types": { 00:13:04.668 "read": true, 00:13:04.668 "write": true, 00:13:04.668 "unmap": true, 00:13:04.668 "flush": true, 00:13:04.668 "reset": true, 00:13:04.668 "nvme_admin": false, 00:13:04.668 "nvme_io": false, 00:13:04.668 "nvme_io_md": false, 00:13:04.668 "write_zeroes": true, 00:13:04.668 "zcopy": true, 00:13:04.668 "get_zone_info": false, 00:13:04.668 "zone_management": false, 00:13:04.668 "zone_append": false, 00:13:04.668 "compare": false, 00:13:04.668 "compare_and_write": false, 00:13:04.668 "abort": true, 00:13:04.668 "seek_hole": false, 00:13:04.668 "seek_data": false, 00:13:04.668 "copy": true, 00:13:04.668 "nvme_iov_md": false 00:13:04.668 }, 00:13:04.668 "memory_domains": [ 00:13:04.668 { 00:13:04.668 "dma_device_id": "system", 00:13:04.668 "dma_device_type": 1 00:13:04.668 }, 00:13:04.668 { 00:13:04.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.668 "dma_device_type": 2 00:13:04.668 } 00:13:04.668 ], 00:13:04.668 "driver_specific": {} 00:13:04.668 } 00:13:04.668 ] 00:13:04.668 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.668 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:04.668 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:04.668 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:04.668 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:04.668 14:37:56 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.668 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.668 BaseBdev3 00:13:04.668 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.668 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:04.668 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:04.668 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:04.668 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:04.668 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:04.668 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:04.668 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:04.668 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.668 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.668 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.668 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:04.668 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.668 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.668 [ 00:13:04.668 { 00:13:04.668 "name": "BaseBdev3", 00:13:04.668 "aliases": [ 00:13:04.668 "9bd65a74-f3b5-4319-ab3d-677261308b4c" 00:13:04.668 ], 00:13:04.668 
"product_name": "Malloc disk", 00:13:04.668 "block_size": 512, 00:13:04.668 "num_blocks": 65536, 00:13:04.668 "uuid": "9bd65a74-f3b5-4319-ab3d-677261308b4c", 00:13:04.668 "assigned_rate_limits": { 00:13:04.668 "rw_ios_per_sec": 0, 00:13:04.668 "rw_mbytes_per_sec": 0, 00:13:04.668 "r_mbytes_per_sec": 0, 00:13:04.668 "w_mbytes_per_sec": 0 00:13:04.668 }, 00:13:04.668 "claimed": false, 00:13:04.668 "zoned": false, 00:13:04.668 "supported_io_types": { 00:13:04.668 "read": true, 00:13:04.668 "write": true, 00:13:04.668 "unmap": true, 00:13:04.668 "flush": true, 00:13:04.668 "reset": true, 00:13:04.668 "nvme_admin": false, 00:13:04.668 "nvme_io": false, 00:13:04.668 "nvme_io_md": false, 00:13:04.668 "write_zeroes": true, 00:13:04.668 "zcopy": true, 00:13:04.668 "get_zone_info": false, 00:13:04.668 "zone_management": false, 00:13:04.668 "zone_append": false, 00:13:04.668 "compare": false, 00:13:04.668 "compare_and_write": false, 00:13:04.668 "abort": true, 00:13:04.668 "seek_hole": false, 00:13:04.668 "seek_data": false, 00:13:04.668 "copy": true, 00:13:04.668 "nvme_iov_md": false 00:13:04.668 }, 00:13:04.668 "memory_domains": [ 00:13:04.668 { 00:13:04.668 "dma_device_id": "system", 00:13:04.669 "dma_device_type": 1 00:13:04.669 }, 00:13:04.669 { 00:13:04.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.669 "dma_device_type": 2 00:13:04.669 } 00:13:04.669 ], 00:13:04.669 "driver_specific": {} 00:13:04.669 } 00:13:04.669 ] 00:13:04.669 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.669 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:04.669 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:04.669 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:04.669 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:13:04.669 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.669 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.669 BaseBdev4 00:13:04.669 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.669 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:04.669 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:13:04.669 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:04.669 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:04.669 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:04.669 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:04.669 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:04.669 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.669 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.669 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.669 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:04.669 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.669 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.669 [ 00:13:04.669 { 00:13:04.669 "name": "BaseBdev4", 00:13:04.669 
"aliases": [ 00:13:04.669 "d45e1eed-6f79-490a-9d89-d4ffe1f4bad8" 00:13:04.669 ], 00:13:04.669 "product_name": "Malloc disk", 00:13:04.669 "block_size": 512, 00:13:04.669 "num_blocks": 65536, 00:13:04.669 "uuid": "d45e1eed-6f79-490a-9d89-d4ffe1f4bad8", 00:13:04.669 "assigned_rate_limits": { 00:13:04.669 "rw_ios_per_sec": 0, 00:13:04.669 "rw_mbytes_per_sec": 0, 00:13:04.669 "r_mbytes_per_sec": 0, 00:13:04.669 "w_mbytes_per_sec": 0 00:13:04.669 }, 00:13:04.669 "claimed": false, 00:13:04.669 "zoned": false, 00:13:04.669 "supported_io_types": { 00:13:04.669 "read": true, 00:13:04.669 "write": true, 00:13:04.669 "unmap": true, 00:13:04.669 "flush": true, 00:13:04.669 "reset": true, 00:13:04.669 "nvme_admin": false, 00:13:04.669 "nvme_io": false, 00:13:04.669 "nvme_io_md": false, 00:13:04.669 "write_zeroes": true, 00:13:04.669 "zcopy": true, 00:13:04.669 "get_zone_info": false, 00:13:04.669 "zone_management": false, 00:13:04.669 "zone_append": false, 00:13:04.669 "compare": false, 00:13:04.669 "compare_and_write": false, 00:13:04.669 "abort": true, 00:13:04.669 "seek_hole": false, 00:13:04.669 "seek_data": false, 00:13:04.669 "copy": true, 00:13:04.669 "nvme_iov_md": false 00:13:04.669 }, 00:13:04.669 "memory_domains": [ 00:13:04.669 { 00:13:04.669 "dma_device_id": "system", 00:13:04.669 "dma_device_type": 1 00:13:04.669 }, 00:13:04.669 { 00:13:04.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.669 "dma_device_type": 2 00:13:04.669 } 00:13:04.669 ], 00:13:04.669 "driver_specific": {} 00:13:04.669 } 00:13:04.669 ] 00:13:04.669 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.669 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:04.669 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:04.669 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:04.669 
14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:04.669 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.669 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.669 [2024-10-01 14:37:56.208817] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:04.669 [2024-10-01 14:37:56.208875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:04.669 [2024-10-01 14:37:56.208899] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:04.669 [2024-10-01 14:37:56.210789] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:04.669 [2024-10-01 14:37:56.210844] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:04.669 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.669 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:04.669 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:04.669 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:04.669 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:04.669 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:04.669 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:04.669 14:37:56 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.669 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.669 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.669 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.669 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:04.669 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.669 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.669 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.669 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.669 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.669 "name": "Existed_Raid", 00:13:04.669 "uuid": "c9520c36-0d2e-4ad9-aa1b-3389051cff40", 00:13:04.669 "strip_size_kb": 64, 00:13:04.669 "state": "configuring", 00:13:04.669 "raid_level": "raid5f", 00:13:04.669 "superblock": true, 00:13:04.669 "num_base_bdevs": 4, 00:13:04.669 "num_base_bdevs_discovered": 3, 00:13:04.669 "num_base_bdevs_operational": 4, 00:13:04.669 "base_bdevs_list": [ 00:13:04.669 { 00:13:04.669 "name": "BaseBdev1", 00:13:04.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.669 "is_configured": false, 00:13:04.669 "data_offset": 0, 00:13:04.669 "data_size": 0 00:13:04.669 }, 00:13:04.669 { 00:13:04.669 "name": "BaseBdev2", 00:13:04.669 "uuid": "90732e38-2d9a-43b0-8b83-78f69b636edf", 00:13:04.669 "is_configured": true, 00:13:04.669 "data_offset": 2048, 00:13:04.669 "data_size": 63488 00:13:04.669 }, 00:13:04.669 { 00:13:04.669 "name": "BaseBdev3", 
00:13:04.669 "uuid": "9bd65a74-f3b5-4319-ab3d-677261308b4c", 00:13:04.669 "is_configured": true, 00:13:04.669 "data_offset": 2048, 00:13:04.669 "data_size": 63488 00:13:04.669 }, 00:13:04.669 { 00:13:04.669 "name": "BaseBdev4", 00:13:04.669 "uuid": "d45e1eed-6f79-490a-9d89-d4ffe1f4bad8", 00:13:04.669 "is_configured": true, 00:13:04.669 "data_offset": 2048, 00:13:04.669 "data_size": 63488 00:13:04.669 } 00:13:04.669 ] 00:13:04.669 }' 00:13:04.669 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.669 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.930 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:04.930 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.930 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.930 [2024-10-01 14:37:56.556880] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:04.930 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.930 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:04.930 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:04.930 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:04.930 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:04.930 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:04.930 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:04.930 
14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.930 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.930 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.930 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.930 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.930 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.930 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.930 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:04.930 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.930 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.930 "name": "Existed_Raid", 00:13:04.930 "uuid": "c9520c36-0d2e-4ad9-aa1b-3389051cff40", 00:13:04.930 "strip_size_kb": 64, 00:13:04.930 "state": "configuring", 00:13:04.930 "raid_level": "raid5f", 00:13:04.930 "superblock": true, 00:13:04.930 "num_base_bdevs": 4, 00:13:04.930 "num_base_bdevs_discovered": 2, 00:13:04.930 "num_base_bdevs_operational": 4, 00:13:04.930 "base_bdevs_list": [ 00:13:04.930 { 00:13:04.930 "name": "BaseBdev1", 00:13:04.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.930 "is_configured": false, 00:13:04.930 "data_offset": 0, 00:13:04.930 "data_size": 0 00:13:04.930 }, 00:13:04.930 { 00:13:04.930 "name": null, 00:13:04.930 "uuid": "90732e38-2d9a-43b0-8b83-78f69b636edf", 00:13:04.930 "is_configured": false, 00:13:04.930 "data_offset": 0, 00:13:04.930 "data_size": 63488 00:13:04.930 }, 00:13:04.930 { 
00:13:04.930 "name": "BaseBdev3", 00:13:04.930 "uuid": "9bd65a74-f3b5-4319-ab3d-677261308b4c", 00:13:04.930 "is_configured": true, 00:13:04.930 "data_offset": 2048, 00:13:04.930 "data_size": 63488 00:13:04.930 }, 00:13:04.930 { 00:13:04.930 "name": "BaseBdev4", 00:13:04.930 "uuid": "d45e1eed-6f79-490a-9d89-d4ffe1f4bad8", 00:13:04.930 "is_configured": true, 00:13:04.930 "data_offset": 2048, 00:13:04.930 "data_size": 63488 00:13:04.930 } 00:13:04.930 ] 00:13:04.930 }' 00:13:04.930 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.930 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.502 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.502 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.502 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.502 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:05.502 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.502 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:05.502 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:05.502 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.502 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.502 [2024-10-01 14:37:56.963751] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:05.502 BaseBdev1 00:13:05.502 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:05.502 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:05.502 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:05.502 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:05.502 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:05.502 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:05.502 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:05.502 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:05.502 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.502 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.502 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.502 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:05.502 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.502 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.502 [ 00:13:05.502 { 00:13:05.502 "name": "BaseBdev1", 00:13:05.502 "aliases": [ 00:13:05.502 "ee553bc7-84ce-4718-aea3-30b26c48c114" 00:13:05.502 ], 00:13:05.502 "product_name": "Malloc disk", 00:13:05.502 "block_size": 512, 00:13:05.502 "num_blocks": 65536, 00:13:05.502 "uuid": "ee553bc7-84ce-4718-aea3-30b26c48c114", 00:13:05.502 "assigned_rate_limits": { 00:13:05.502 "rw_ios_per_sec": 0, 00:13:05.502 "rw_mbytes_per_sec": 0, 00:13:05.502 
"r_mbytes_per_sec": 0, 00:13:05.502 "w_mbytes_per_sec": 0 00:13:05.502 }, 00:13:05.502 "claimed": true, 00:13:05.502 "claim_type": "exclusive_write", 00:13:05.502 "zoned": false, 00:13:05.502 "supported_io_types": { 00:13:05.502 "read": true, 00:13:05.502 "write": true, 00:13:05.502 "unmap": true, 00:13:05.502 "flush": true, 00:13:05.502 "reset": true, 00:13:05.502 "nvme_admin": false, 00:13:05.502 "nvme_io": false, 00:13:05.502 "nvme_io_md": false, 00:13:05.502 "write_zeroes": true, 00:13:05.502 "zcopy": true, 00:13:05.502 "get_zone_info": false, 00:13:05.502 "zone_management": false, 00:13:05.502 "zone_append": false, 00:13:05.502 "compare": false, 00:13:05.502 "compare_and_write": false, 00:13:05.502 "abort": true, 00:13:05.502 "seek_hole": false, 00:13:05.502 "seek_data": false, 00:13:05.502 "copy": true, 00:13:05.502 "nvme_iov_md": false 00:13:05.502 }, 00:13:05.502 "memory_domains": [ 00:13:05.502 { 00:13:05.502 "dma_device_id": "system", 00:13:05.502 "dma_device_type": 1 00:13:05.502 }, 00:13:05.502 { 00:13:05.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:05.502 "dma_device_type": 2 00:13:05.502 } 00:13:05.502 ], 00:13:05.502 "driver_specific": {} 00:13:05.502 } 00:13:05.502 ] 00:13:05.502 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.502 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:05.502 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:05.502 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:05.502 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:05.502 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:05.502 14:37:56 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:05.502 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:05.502 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.502 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.502 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.502 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.502 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.502 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.502 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.502 14:37:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:05.502 14:37:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.502 14:37:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.502 "name": "Existed_Raid", 00:13:05.502 "uuid": "c9520c36-0d2e-4ad9-aa1b-3389051cff40", 00:13:05.502 "strip_size_kb": 64, 00:13:05.502 "state": "configuring", 00:13:05.502 "raid_level": "raid5f", 00:13:05.502 "superblock": true, 00:13:05.502 "num_base_bdevs": 4, 00:13:05.502 "num_base_bdevs_discovered": 3, 00:13:05.502 "num_base_bdevs_operational": 4, 00:13:05.502 "base_bdevs_list": [ 00:13:05.502 { 00:13:05.502 "name": "BaseBdev1", 00:13:05.502 "uuid": "ee553bc7-84ce-4718-aea3-30b26c48c114", 00:13:05.502 "is_configured": true, 00:13:05.502 "data_offset": 2048, 00:13:05.502 "data_size": 63488 00:13:05.502 
}, 00:13:05.502 { 00:13:05.503 "name": null, 00:13:05.503 "uuid": "90732e38-2d9a-43b0-8b83-78f69b636edf", 00:13:05.503 "is_configured": false, 00:13:05.503 "data_offset": 0, 00:13:05.503 "data_size": 63488 00:13:05.503 }, 00:13:05.503 { 00:13:05.503 "name": "BaseBdev3", 00:13:05.503 "uuid": "9bd65a74-f3b5-4319-ab3d-677261308b4c", 00:13:05.503 "is_configured": true, 00:13:05.503 "data_offset": 2048, 00:13:05.503 "data_size": 63488 00:13:05.503 }, 00:13:05.503 { 00:13:05.503 "name": "BaseBdev4", 00:13:05.503 "uuid": "d45e1eed-6f79-490a-9d89-d4ffe1f4bad8", 00:13:05.503 "is_configured": true, 00:13:05.503 "data_offset": 2048, 00:13:05.503 "data_size": 63488 00:13:05.503 } 00:13:05.503 ] 00:13:05.503 }' 00:13:05.503 14:37:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.503 14:37:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.764 14:37:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.764 14:37:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.764 14:37:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.764 14:37:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:05.764 14:37:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.764 14:37:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:05.764 14:37:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:05.764 14:37:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.764 14:37:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.764 
[2024-10-01 14:37:57.335906] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:05.764 14:37:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.764 14:37:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:05.764 14:37:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:05.764 14:37:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:05.765 14:37:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:05.765 14:37:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:05.765 14:37:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:05.765 14:37:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.765 14:37:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.765 14:37:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.765 14:37:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.765 14:37:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.765 14:37:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.765 14:37:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:05.765 14:37:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.765 14:37:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:13:05.765 14:37:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.765 "name": "Existed_Raid", 00:13:05.765 "uuid": "c9520c36-0d2e-4ad9-aa1b-3389051cff40", 00:13:05.765 "strip_size_kb": 64, 00:13:05.765 "state": "configuring", 00:13:05.765 "raid_level": "raid5f", 00:13:05.765 "superblock": true, 00:13:05.765 "num_base_bdevs": 4, 00:13:05.765 "num_base_bdevs_discovered": 2, 00:13:05.765 "num_base_bdevs_operational": 4, 00:13:05.765 "base_bdevs_list": [ 00:13:05.765 { 00:13:05.765 "name": "BaseBdev1", 00:13:05.765 "uuid": "ee553bc7-84ce-4718-aea3-30b26c48c114", 00:13:05.765 "is_configured": true, 00:13:05.765 "data_offset": 2048, 00:13:05.765 "data_size": 63488 00:13:05.765 }, 00:13:05.765 { 00:13:05.765 "name": null, 00:13:05.765 "uuid": "90732e38-2d9a-43b0-8b83-78f69b636edf", 00:13:05.765 "is_configured": false, 00:13:05.765 "data_offset": 0, 00:13:05.765 "data_size": 63488 00:13:05.765 }, 00:13:05.765 { 00:13:05.765 "name": null, 00:13:05.765 "uuid": "9bd65a74-f3b5-4319-ab3d-677261308b4c", 00:13:05.765 "is_configured": false, 00:13:05.765 "data_offset": 0, 00:13:05.765 "data_size": 63488 00:13:05.765 }, 00:13:05.765 { 00:13:05.765 "name": "BaseBdev4", 00:13:05.765 "uuid": "d45e1eed-6f79-490a-9d89-d4ffe1f4bad8", 00:13:05.765 "is_configured": true, 00:13:05.765 "data_offset": 2048, 00:13:05.765 "data_size": 63488 00:13:05.765 } 00:13:05.765 ] 00:13:05.765 }' 00:13:05.765 14:37:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.765 14:37:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.026 14:37:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.026 14:37:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.026 14:37:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:13:06.026 14:37:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:06.026 14:37:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.026 14:37:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:06.026 14:37:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:06.026 14:37:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.026 14:37:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.026 [2024-10-01 14:37:57.696089] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:06.026 14:37:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.026 14:37:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:06.026 14:37:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:06.026 14:37:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:06.026 14:37:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:06.026 14:37:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:06.026 14:37:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:06.026 14:37:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.026 14:37:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.026 14:37:57 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.026 14:37:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.026 14:37:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:06.026 14:37:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.026 14:37:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.026 14:37:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.286 14:37:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.286 14:37:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.286 "name": "Existed_Raid", 00:13:06.286 "uuid": "c9520c36-0d2e-4ad9-aa1b-3389051cff40", 00:13:06.286 "strip_size_kb": 64, 00:13:06.286 "state": "configuring", 00:13:06.286 "raid_level": "raid5f", 00:13:06.286 "superblock": true, 00:13:06.286 "num_base_bdevs": 4, 00:13:06.286 "num_base_bdevs_discovered": 3, 00:13:06.286 "num_base_bdevs_operational": 4, 00:13:06.286 "base_bdevs_list": [ 00:13:06.286 { 00:13:06.286 "name": "BaseBdev1", 00:13:06.286 "uuid": "ee553bc7-84ce-4718-aea3-30b26c48c114", 00:13:06.286 "is_configured": true, 00:13:06.286 "data_offset": 2048, 00:13:06.286 "data_size": 63488 00:13:06.286 }, 00:13:06.286 { 00:13:06.286 "name": null, 00:13:06.286 "uuid": "90732e38-2d9a-43b0-8b83-78f69b636edf", 00:13:06.286 "is_configured": false, 00:13:06.286 "data_offset": 0, 00:13:06.286 "data_size": 63488 00:13:06.286 }, 00:13:06.286 { 00:13:06.286 "name": "BaseBdev3", 00:13:06.286 "uuid": "9bd65a74-f3b5-4319-ab3d-677261308b4c", 00:13:06.286 "is_configured": true, 00:13:06.286 "data_offset": 2048, 00:13:06.286 "data_size": 63488 00:13:06.286 }, 00:13:06.286 { 
00:13:06.286 "name": "BaseBdev4", 00:13:06.286 "uuid": "d45e1eed-6f79-490a-9d89-d4ffe1f4bad8", 00:13:06.286 "is_configured": true, 00:13:06.286 "data_offset": 2048, 00:13:06.286 "data_size": 63488 00:13:06.286 } 00:13:06.286 ] 00:13:06.286 }' 00:13:06.286 14:37:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.287 14:37:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.547 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:06.547 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.547 14:37:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.547 14:37:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.547 14:37:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.547 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:06.547 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:06.547 14:37:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.547 14:37:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.547 [2024-10-01 14:37:58.096159] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:06.547 14:37:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.547 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:06.547 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:13:06.547 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:06.547 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:06.547 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:06.547 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:06.547 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.547 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.547 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.547 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.547 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.547 14:37:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.547 14:37:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.547 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:06.547 14:37:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.547 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.547 "name": "Existed_Raid", 00:13:06.547 "uuid": "c9520c36-0d2e-4ad9-aa1b-3389051cff40", 00:13:06.547 "strip_size_kb": 64, 00:13:06.547 "state": "configuring", 00:13:06.547 "raid_level": "raid5f", 00:13:06.547 "superblock": true, 00:13:06.547 "num_base_bdevs": 4, 00:13:06.547 "num_base_bdevs_discovered": 2, 00:13:06.547 
"num_base_bdevs_operational": 4, 00:13:06.547 "base_bdevs_list": [ 00:13:06.547 { 00:13:06.547 "name": null, 00:13:06.547 "uuid": "ee553bc7-84ce-4718-aea3-30b26c48c114", 00:13:06.547 "is_configured": false, 00:13:06.547 "data_offset": 0, 00:13:06.547 "data_size": 63488 00:13:06.547 }, 00:13:06.547 { 00:13:06.547 "name": null, 00:13:06.547 "uuid": "90732e38-2d9a-43b0-8b83-78f69b636edf", 00:13:06.547 "is_configured": false, 00:13:06.547 "data_offset": 0, 00:13:06.547 "data_size": 63488 00:13:06.547 }, 00:13:06.547 { 00:13:06.547 "name": "BaseBdev3", 00:13:06.547 "uuid": "9bd65a74-f3b5-4319-ab3d-677261308b4c", 00:13:06.547 "is_configured": true, 00:13:06.547 "data_offset": 2048, 00:13:06.547 "data_size": 63488 00:13:06.547 }, 00:13:06.547 { 00:13:06.547 "name": "BaseBdev4", 00:13:06.547 "uuid": "d45e1eed-6f79-490a-9d89-d4ffe1f4bad8", 00:13:06.547 "is_configured": true, 00:13:06.547 "data_offset": 2048, 00:13:06.547 "data_size": 63488 00:13:06.547 } 00:13:06.547 ] 00:13:06.547 }' 00:13:06.547 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.547 14:37:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.807 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.807 14:37:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.807 14:37:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.807 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:07.068 14:37:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.068 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:07.068 14:37:58 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:07.068 14:37:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.068 14:37:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.068 [2024-10-01 14:37:58.516555] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:07.068 14:37:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.068 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:07.068 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:07.068 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:07.068 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:07.068 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:07.068 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:07.068 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.068 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.068 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.068 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.068 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:07.068 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:07.068 14:37:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.068 14:37:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.068 14:37:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.068 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.068 "name": "Existed_Raid", 00:13:07.068 "uuid": "c9520c36-0d2e-4ad9-aa1b-3389051cff40", 00:13:07.068 "strip_size_kb": 64, 00:13:07.068 "state": "configuring", 00:13:07.068 "raid_level": "raid5f", 00:13:07.068 "superblock": true, 00:13:07.068 "num_base_bdevs": 4, 00:13:07.068 "num_base_bdevs_discovered": 3, 00:13:07.068 "num_base_bdevs_operational": 4, 00:13:07.068 "base_bdevs_list": [ 00:13:07.068 { 00:13:07.068 "name": null, 00:13:07.068 "uuid": "ee553bc7-84ce-4718-aea3-30b26c48c114", 00:13:07.068 "is_configured": false, 00:13:07.068 "data_offset": 0, 00:13:07.068 "data_size": 63488 00:13:07.068 }, 00:13:07.068 { 00:13:07.068 "name": "BaseBdev2", 00:13:07.068 "uuid": "90732e38-2d9a-43b0-8b83-78f69b636edf", 00:13:07.068 "is_configured": true, 00:13:07.068 "data_offset": 2048, 00:13:07.068 "data_size": 63488 00:13:07.068 }, 00:13:07.068 { 00:13:07.068 "name": "BaseBdev3", 00:13:07.068 "uuid": "9bd65a74-f3b5-4319-ab3d-677261308b4c", 00:13:07.068 "is_configured": true, 00:13:07.068 "data_offset": 2048, 00:13:07.068 "data_size": 63488 00:13:07.068 }, 00:13:07.068 { 00:13:07.068 "name": "BaseBdev4", 00:13:07.068 "uuid": "d45e1eed-6f79-490a-9d89-d4ffe1f4bad8", 00:13:07.068 "is_configured": true, 00:13:07.068 "data_offset": 2048, 00:13:07.068 "data_size": 63488 00:13:07.068 } 00:13:07.068 ] 00:13:07.068 }' 00:13:07.068 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.068 14:37:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:13:07.329 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.329 14:37:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.329 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:07.329 14:37:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.329 14:37:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.329 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:07.329 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:07.329 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.329 14:37:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.329 14:37:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.329 14:37:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.329 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ee553bc7-84ce-4718-aea3-30b26c48c114 00:13:07.329 14:37:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.329 14:37:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.329 [2024-10-01 14:37:58.959567] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:07.329 [2024-10-01 14:37:58.959797] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:07.329 [2024-10-01 
14:37:58.959811] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:07.329 NewBaseBdev 00:13:07.329 [2024-10-01 14:37:58.960061] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:07.329 14:37:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.329 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:07.329 14:37:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:13:07.329 14:37:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:07.329 14:37:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:07.329 14:37:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:07.329 14:37:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:07.329 14:37:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:07.329 14:37:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.329 14:37:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.329 [2024-10-01 14:37:58.964837] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:07.329 [2024-10-01 14:37:58.964862] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:07.329 [2024-10-01 14:37:58.965002] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:07.329 14:37:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.329 14:37:58 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:07.329 14:37:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.329 14:37:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.329 [ 00:13:07.329 { 00:13:07.329 "name": "NewBaseBdev", 00:13:07.329 "aliases": [ 00:13:07.329 "ee553bc7-84ce-4718-aea3-30b26c48c114" 00:13:07.329 ], 00:13:07.329 "product_name": "Malloc disk", 00:13:07.329 "block_size": 512, 00:13:07.329 "num_blocks": 65536, 00:13:07.329 "uuid": "ee553bc7-84ce-4718-aea3-30b26c48c114", 00:13:07.329 "assigned_rate_limits": { 00:13:07.329 "rw_ios_per_sec": 0, 00:13:07.329 "rw_mbytes_per_sec": 0, 00:13:07.329 "r_mbytes_per_sec": 0, 00:13:07.329 "w_mbytes_per_sec": 0 00:13:07.329 }, 00:13:07.329 "claimed": true, 00:13:07.329 "claim_type": "exclusive_write", 00:13:07.329 "zoned": false, 00:13:07.329 "supported_io_types": { 00:13:07.329 "read": true, 00:13:07.329 "write": true, 00:13:07.329 "unmap": true, 00:13:07.330 "flush": true, 00:13:07.330 "reset": true, 00:13:07.330 "nvme_admin": false, 00:13:07.330 "nvme_io": false, 00:13:07.330 "nvme_io_md": false, 00:13:07.330 "write_zeroes": true, 00:13:07.330 "zcopy": true, 00:13:07.330 "get_zone_info": false, 00:13:07.330 "zone_management": false, 00:13:07.330 "zone_append": false, 00:13:07.330 "compare": false, 00:13:07.330 "compare_and_write": false, 00:13:07.330 "abort": true, 00:13:07.330 "seek_hole": false, 00:13:07.330 "seek_data": false, 00:13:07.330 "copy": true, 00:13:07.330 "nvme_iov_md": false 00:13:07.330 }, 00:13:07.330 "memory_domains": [ 00:13:07.330 { 00:13:07.330 "dma_device_id": "system", 00:13:07.330 "dma_device_type": 1 00:13:07.330 }, 00:13:07.330 { 00:13:07.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.330 "dma_device_type": 2 00:13:07.330 } 00:13:07.330 ], 00:13:07.330 "driver_specific": {} 00:13:07.330 } 00:13:07.330 ] 00:13:07.330 14:37:58 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.330 14:37:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:07.330 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:13:07.330 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:07.330 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:07.330 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:07.330 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:07.330 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:07.330 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.330 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.330 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.330 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.330 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.330 14:37:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.330 14:37:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.330 14:37:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:07.330 14:37:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:07.591 14:37:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.591 "name": "Existed_Raid", 00:13:07.591 "uuid": "c9520c36-0d2e-4ad9-aa1b-3389051cff40", 00:13:07.591 "strip_size_kb": 64, 00:13:07.591 "state": "online", 00:13:07.591 "raid_level": "raid5f", 00:13:07.591 "superblock": true, 00:13:07.591 "num_base_bdevs": 4, 00:13:07.591 "num_base_bdevs_discovered": 4, 00:13:07.591 "num_base_bdevs_operational": 4, 00:13:07.591 "base_bdevs_list": [ 00:13:07.591 { 00:13:07.591 "name": "NewBaseBdev", 00:13:07.591 "uuid": "ee553bc7-84ce-4718-aea3-30b26c48c114", 00:13:07.591 "is_configured": true, 00:13:07.591 "data_offset": 2048, 00:13:07.591 "data_size": 63488 00:13:07.591 }, 00:13:07.591 { 00:13:07.591 "name": "BaseBdev2", 00:13:07.591 "uuid": "90732e38-2d9a-43b0-8b83-78f69b636edf", 00:13:07.591 "is_configured": true, 00:13:07.591 "data_offset": 2048, 00:13:07.591 "data_size": 63488 00:13:07.591 }, 00:13:07.591 { 00:13:07.591 "name": "BaseBdev3", 00:13:07.591 "uuid": "9bd65a74-f3b5-4319-ab3d-677261308b4c", 00:13:07.591 "is_configured": true, 00:13:07.591 "data_offset": 2048, 00:13:07.591 "data_size": 63488 00:13:07.591 }, 00:13:07.591 { 00:13:07.591 "name": "BaseBdev4", 00:13:07.591 "uuid": "d45e1eed-6f79-490a-9d89-d4ffe1f4bad8", 00:13:07.591 "is_configured": true, 00:13:07.591 "data_offset": 2048, 00:13:07.591 "data_size": 63488 00:13:07.591 } 00:13:07.591 ] 00:13:07.591 }' 00:13:07.591 14:37:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.591 14:37:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.851 14:37:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:07.851 14:37:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:07.851 14:37:59 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:07.851 14:37:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:07.851 14:37:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:07.852 14:37:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:07.852 14:37:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:07.852 14:37:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.852 14:37:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.852 14:37:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:07.852 [2024-10-01 14:37:59.318523] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:07.852 14:37:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.852 14:37:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:07.852 "name": "Existed_Raid", 00:13:07.852 "aliases": [ 00:13:07.852 "c9520c36-0d2e-4ad9-aa1b-3389051cff40" 00:13:07.852 ], 00:13:07.852 "product_name": "Raid Volume", 00:13:07.852 "block_size": 512, 00:13:07.852 "num_blocks": 190464, 00:13:07.852 "uuid": "c9520c36-0d2e-4ad9-aa1b-3389051cff40", 00:13:07.852 "assigned_rate_limits": { 00:13:07.852 "rw_ios_per_sec": 0, 00:13:07.852 "rw_mbytes_per_sec": 0, 00:13:07.852 "r_mbytes_per_sec": 0, 00:13:07.852 "w_mbytes_per_sec": 0 00:13:07.852 }, 00:13:07.852 "claimed": false, 00:13:07.852 "zoned": false, 00:13:07.852 "supported_io_types": { 00:13:07.852 "read": true, 00:13:07.852 "write": true, 00:13:07.852 "unmap": false, 00:13:07.852 "flush": false, 00:13:07.852 "reset": true, 00:13:07.852 "nvme_admin": false, 00:13:07.852 "nvme_io": false, 
00:13:07.852 "nvme_io_md": false, 00:13:07.852 "write_zeroes": true, 00:13:07.852 "zcopy": false, 00:13:07.852 "get_zone_info": false, 00:13:07.852 "zone_management": false, 00:13:07.852 "zone_append": false, 00:13:07.852 "compare": false, 00:13:07.852 "compare_and_write": false, 00:13:07.852 "abort": false, 00:13:07.852 "seek_hole": false, 00:13:07.852 "seek_data": false, 00:13:07.852 "copy": false, 00:13:07.852 "nvme_iov_md": false 00:13:07.852 }, 00:13:07.852 "driver_specific": { 00:13:07.852 "raid": { 00:13:07.852 "uuid": "c9520c36-0d2e-4ad9-aa1b-3389051cff40", 00:13:07.852 "strip_size_kb": 64, 00:13:07.852 "state": "online", 00:13:07.852 "raid_level": "raid5f", 00:13:07.852 "superblock": true, 00:13:07.852 "num_base_bdevs": 4, 00:13:07.852 "num_base_bdevs_discovered": 4, 00:13:07.852 "num_base_bdevs_operational": 4, 00:13:07.852 "base_bdevs_list": [ 00:13:07.852 { 00:13:07.852 "name": "NewBaseBdev", 00:13:07.852 "uuid": "ee553bc7-84ce-4718-aea3-30b26c48c114", 00:13:07.852 "is_configured": true, 00:13:07.852 "data_offset": 2048, 00:13:07.852 "data_size": 63488 00:13:07.852 }, 00:13:07.852 { 00:13:07.852 "name": "BaseBdev2", 00:13:07.852 "uuid": "90732e38-2d9a-43b0-8b83-78f69b636edf", 00:13:07.852 "is_configured": true, 00:13:07.852 "data_offset": 2048, 00:13:07.852 "data_size": 63488 00:13:07.852 }, 00:13:07.852 { 00:13:07.852 "name": "BaseBdev3", 00:13:07.852 "uuid": "9bd65a74-f3b5-4319-ab3d-677261308b4c", 00:13:07.852 "is_configured": true, 00:13:07.852 "data_offset": 2048, 00:13:07.852 "data_size": 63488 00:13:07.852 }, 00:13:07.852 { 00:13:07.852 "name": "BaseBdev4", 00:13:07.852 "uuid": "d45e1eed-6f79-490a-9d89-d4ffe1f4bad8", 00:13:07.852 "is_configured": true, 00:13:07.852 "data_offset": 2048, 00:13:07.852 "data_size": 63488 00:13:07.852 } 00:13:07.852 ] 00:13:07.852 } 00:13:07.852 } 00:13:07.852 }' 00:13:07.852 14:37:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:13:07.852 14:37:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:07.852 BaseBdev2 00:13:07.852 BaseBdev3 00:13:07.852 BaseBdev4' 00:13:07.852 14:37:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:07.852 14:37:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:07.852 14:37:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:07.852 14:37:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:07.852 14:37:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:07.852 14:37:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.852 14:37:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.852 14:37:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.852 14:37:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:07.852 14:37:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:07.852 14:37:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:07.852 14:37:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:07.852 14:37:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.852 14:37:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.852 14:37:59 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:07.852 14:37:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.852 14:37:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:07.852 14:37:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:07.852 14:37:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:07.852 14:37:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:07.852 14:37:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.852 14:37:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:07.852 14:37:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.852 14:37:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.852 14:37:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:07.852 14:37:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:07.852 14:37:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:07.852 14:37:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:07.852 14:37:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.852 14:37:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.852 14:37:59 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:07.852 14:37:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.852 14:37:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:07.852 14:37:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:07.852 14:37:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:07.852 14:37:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.852 14:37:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.112 [2024-10-01 14:37:59.538332] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:08.112 [2024-10-01 14:37:59.538367] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:08.112 [2024-10-01 14:37:59.538443] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:08.113 [2024-10-01 14:37:59.538758] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:08.113 [2024-10-01 14:37:59.538776] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:08.113 14:37:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.113 14:37:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 81349 00:13:08.113 14:37:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 81349 ']' 00:13:08.113 14:37:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 81349 00:13:08.113 14:37:59 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:13:08.113 14:37:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:08.113 14:37:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81349 00:13:08.113 14:37:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:08.113 killing process with pid 81349 00:13:08.113 14:37:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:08.113 14:37:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81349' 00:13:08.113 14:37:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 81349 00:13:08.113 [2024-10-01 14:37:59.566108] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:08.113 14:37:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 81349 00:13:08.374 [2024-10-01 14:37:59.811817] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:09.315 14:38:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:09.315 00:13:09.315 real 0m8.623s 00:13:09.315 user 0m13.632s 00:13:09.315 sys 0m1.465s 00:13:09.315 14:38:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:09.315 ************************************ 00:13:09.315 END TEST raid5f_state_function_test_sb 00:13:09.315 14:38:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.315 ************************************ 00:13:09.315 14:38:00 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:13:09.315 14:38:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:09.315 
14:38:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:09.315 14:38:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:09.315 ************************************ 00:13:09.315 START TEST raid5f_superblock_test 00:13:09.315 ************************************ 00:13:09.315 14:38:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 4 00:13:09.315 14:38:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:13:09.315 14:38:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:13:09.315 14:38:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:09.315 14:38:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:09.315 14:38:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:09.315 14:38:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:09.315 14:38:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:09.315 14:38:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:09.315 14:38:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:09.315 14:38:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:09.315 14:38:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:09.315 14:38:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:09.315 14:38:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:09.315 14:38:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:13:09.315 14:38:00 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:09.315 14:38:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:09.315 14:38:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81986 00:13:09.315 14:38:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81986 00:13:09.315 14:38:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 81986 ']' 00:13:09.316 14:38:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.316 14:38:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:09.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.316 14:38:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.316 14:38:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:09.316 14:38:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:09.316 14:38:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.316 [2024-10-01 14:38:00.760485] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:13:09.316 [2024-10-01 14:38:00.760613] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81986 ] 00:13:09.316 [2024-10-01 14:38:00.907624] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.577 [2024-10-01 14:38:01.098457] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.577 [2024-10-01 14:38:01.238944] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:09.577 [2024-10-01 14:38:01.238992] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.151 malloc1 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.151 [2024-10-01 14:38:01.651714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:10.151 [2024-10-01 14:38:01.651785] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.151 [2024-10-01 14:38:01.651806] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:10.151 [2024-10-01 14:38:01.651818] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.151 [2024-10-01 14:38:01.653973] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.151 [2024-10-01 14:38:01.654010] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:10.151 pt1 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.151 malloc2 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.151 [2024-10-01 14:38:01.698234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:10.151 [2024-10-01 14:38:01.698306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.151 [2024-10-01 14:38:01.698327] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:10.151 [2024-10-01 14:38:01.698336] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.151 [2024-10-01 14:38:01.700467] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.151 [2024-10-01 14:38:01.700505] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:10.151 pt2 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.151 malloc3 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.151 [2024-10-01 14:38:01.734658] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:10.151 [2024-10-01 14:38:01.734729] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.151 [2024-10-01 14:38:01.734749] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:10.151 [2024-10-01 14:38:01.734759] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.151 [2024-10-01 14:38:01.736897] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.151 [2024-10-01 14:38:01.736936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:10.151 pt3 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.151 14:38:01 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.151 malloc4 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.151 [2024-10-01 14:38:01.772251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:10.151 [2024-10-01 14:38:01.772318] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.151 [2024-10-01 14:38:01.772337] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:10.151 [2024-10-01 14:38:01.772347] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.151 [2024-10-01 14:38:01.774490] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.151 [2024-10-01 14:38:01.774529] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:10.151 pt4 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.151 14:38:01 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:10.151 [2024-10-01 14:38:01.780371] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:10.151 [2024-10-01 14:38:01.782698] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:10.151 [2024-10-01 14:38:01.782816] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:10.151 [2024-10-01 14:38:01.782892] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:10.151 [2024-10-01 14:38:01.783138] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:10.151 [2024-10-01 14:38:01.783169] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:10.152 [2024-10-01 14:38:01.783467] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:10.152 [2024-10-01 14:38:01.789391] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:10.152 [2024-10-01 14:38:01.789419] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:10.152 [2024-10-01 14:38:01.789628] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:10.152 14:38:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.152 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:13:10.152 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:10.152 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:10.152 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:10.152 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:10.152 
14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:10.152 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.152 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.152 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.152 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.152 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.152 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.152 14:38:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.152 14:38:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.152 14:38:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.152 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.152 "name": "raid_bdev1", 00:13:10.152 "uuid": "fdcbcf71-275c-41b0-9062-4488ed2efbe7", 00:13:10.152 "strip_size_kb": 64, 00:13:10.152 "state": "online", 00:13:10.152 "raid_level": "raid5f", 00:13:10.152 "superblock": true, 00:13:10.152 "num_base_bdevs": 4, 00:13:10.152 "num_base_bdevs_discovered": 4, 00:13:10.152 "num_base_bdevs_operational": 4, 00:13:10.152 "base_bdevs_list": [ 00:13:10.152 { 00:13:10.152 "name": "pt1", 00:13:10.152 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:10.152 "is_configured": true, 00:13:10.152 "data_offset": 2048, 00:13:10.152 "data_size": 63488 00:13:10.152 }, 00:13:10.152 { 00:13:10.152 "name": "pt2", 00:13:10.152 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:10.152 "is_configured": true, 00:13:10.152 "data_offset": 2048, 00:13:10.152 
"data_size": 63488 00:13:10.152 }, 00:13:10.152 { 00:13:10.152 "name": "pt3", 00:13:10.152 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:10.152 "is_configured": true, 00:13:10.152 "data_offset": 2048, 00:13:10.152 "data_size": 63488 00:13:10.152 }, 00:13:10.152 { 00:13:10.152 "name": "pt4", 00:13:10.152 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:10.152 "is_configured": true, 00:13:10.152 "data_offset": 2048, 00:13:10.152 "data_size": 63488 00:13:10.152 } 00:13:10.152 ] 00:13:10.152 }' 00:13:10.152 14:38:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.152 14:38:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.724 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:10.724 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:10.724 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:10.724 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:10.724 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:10.724 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:10.724 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:10.724 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:10.724 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.724 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.724 [2024-10-01 14:38:02.127890] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:10.724 14:38:02 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.724 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:10.724 "name": "raid_bdev1", 00:13:10.724 "aliases": [ 00:13:10.724 "fdcbcf71-275c-41b0-9062-4488ed2efbe7" 00:13:10.724 ], 00:13:10.724 "product_name": "Raid Volume", 00:13:10.724 "block_size": 512, 00:13:10.724 "num_blocks": 190464, 00:13:10.724 "uuid": "fdcbcf71-275c-41b0-9062-4488ed2efbe7", 00:13:10.724 "assigned_rate_limits": { 00:13:10.724 "rw_ios_per_sec": 0, 00:13:10.724 "rw_mbytes_per_sec": 0, 00:13:10.724 "r_mbytes_per_sec": 0, 00:13:10.724 "w_mbytes_per_sec": 0 00:13:10.724 }, 00:13:10.724 "claimed": false, 00:13:10.724 "zoned": false, 00:13:10.724 "supported_io_types": { 00:13:10.724 "read": true, 00:13:10.724 "write": true, 00:13:10.724 "unmap": false, 00:13:10.724 "flush": false, 00:13:10.724 "reset": true, 00:13:10.724 "nvme_admin": false, 00:13:10.724 "nvme_io": false, 00:13:10.724 "nvme_io_md": false, 00:13:10.724 "write_zeroes": true, 00:13:10.724 "zcopy": false, 00:13:10.724 "get_zone_info": false, 00:13:10.724 "zone_management": false, 00:13:10.724 "zone_append": false, 00:13:10.724 "compare": false, 00:13:10.724 "compare_and_write": false, 00:13:10.724 "abort": false, 00:13:10.724 "seek_hole": false, 00:13:10.724 "seek_data": false, 00:13:10.724 "copy": false, 00:13:10.724 "nvme_iov_md": false 00:13:10.724 }, 00:13:10.724 "driver_specific": { 00:13:10.724 "raid": { 00:13:10.724 "uuid": "fdcbcf71-275c-41b0-9062-4488ed2efbe7", 00:13:10.724 "strip_size_kb": 64, 00:13:10.724 "state": "online", 00:13:10.724 "raid_level": "raid5f", 00:13:10.724 "superblock": true, 00:13:10.724 "num_base_bdevs": 4, 00:13:10.724 "num_base_bdevs_discovered": 4, 00:13:10.724 "num_base_bdevs_operational": 4, 00:13:10.724 "base_bdevs_list": [ 00:13:10.724 { 00:13:10.724 "name": "pt1", 00:13:10.724 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:10.724 "is_configured": true, 00:13:10.724 "data_offset": 2048, 
00:13:10.724 "data_size": 63488 00:13:10.724 }, 00:13:10.724 { 00:13:10.724 "name": "pt2", 00:13:10.724 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:10.724 "is_configured": true, 00:13:10.724 "data_offset": 2048, 00:13:10.724 "data_size": 63488 00:13:10.724 }, 00:13:10.724 { 00:13:10.724 "name": "pt3", 00:13:10.724 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:10.724 "is_configured": true, 00:13:10.725 "data_offset": 2048, 00:13:10.725 "data_size": 63488 00:13:10.725 }, 00:13:10.725 { 00:13:10.725 "name": "pt4", 00:13:10.725 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:10.725 "is_configured": true, 00:13:10.725 "data_offset": 2048, 00:13:10.725 "data_size": 63488 00:13:10.725 } 00:13:10.725 ] 00:13:10.725 } 00:13:10.725 } 00:13:10.725 }' 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:10.725 pt2 00:13:10.725 pt3 00:13:10.725 pt4' 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.725 14:38:02 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:10.725 [2024-10-01 14:38:02.351913] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=fdcbcf71-275c-41b0-9062-4488ed2efbe7 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
fdcbcf71-275c-41b0-9062-4488ed2efbe7 ']' 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.725 [2024-10-01 14:38:02.379729] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:10.725 [2024-10-01 14:38:02.379762] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:10.725 [2024-10-01 14:38:02.379837] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:10.725 [2024-10-01 14:38:02.379926] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:10.725 [2024-10-01 14:38:02.379942] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.725 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.987 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:10.987 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:10.987 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:10.987 
14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:10.987 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.987 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.987 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.987 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:10.987 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:10.987 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.987 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.987 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.987 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:10.987 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:10.987 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.987 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.987 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.987 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:10.987 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:13:10.987 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.987 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.987 14:38:02 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.987 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:10.987 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.987 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.987 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:10.987 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.987 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:10.987 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:10.987 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:13:10.987 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:10.987 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:10.987 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:10.987 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:10.987 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:10.987 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:10.987 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:13:10.987 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.987 [2024-10-01 14:38:02.495783] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:10.987 [2024-10-01 14:38:02.497671] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:10.987 [2024-10-01 14:38:02.497738] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:10.987 [2024-10-01 14:38:02.497775] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:13:10.987 [2024-10-01 14:38:02.497822] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:10.987 [2024-10-01 14:38:02.497883] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:10.987 [2024-10-01 14:38:02.497905] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:10.987 [2024-10-01 14:38:02.497925] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:13:10.987 [2024-10-01 14:38:02.497938] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:10.987 [2024-10-01 14:38:02.497952] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:13:10.987 request: 00:13:10.987 { 00:13:10.987 "name": "raid_bdev1", 00:13:10.987 "raid_level": "raid5f", 00:13:10.988 "base_bdevs": [ 00:13:10.988 "malloc1", 00:13:10.988 "malloc2", 00:13:10.988 "malloc3", 00:13:10.988 "malloc4" 00:13:10.988 ], 00:13:10.988 "strip_size_kb": 64, 00:13:10.988 "superblock": false, 00:13:10.988 "method": "bdev_raid_create", 00:13:10.988 "req_id": 1 00:13:10.988 } 00:13:10.988 Got JSON-RPC error response 
00:13:10.988 response: 00:13:10.988 { 00:13:10.988 "code": -17, 00:13:10.988 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:10.988 } 00:13:10.988 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:10.988 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:13:10.988 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:10.988 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:10.988 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:10.988 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.988 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:10.988 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.988 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.988 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.988 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:10.988 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:10.988 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:10.988 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.988 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.988 [2024-10-01 14:38:02.539779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:10.988 [2024-10-01 14:38:02.539843] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:13:10.988 [2024-10-01 14:38:02.539860] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:10.988 [2024-10-01 14:38:02.539872] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.988 [2024-10-01 14:38:02.542056] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.988 [2024-10-01 14:38:02.542101] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:10.988 [2024-10-01 14:38:02.542175] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:10.988 [2024-10-01 14:38:02.542232] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:10.988 pt1 00:13:10.988 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.988 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:13:10.988 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:10.988 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:10.988 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:10.988 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:10.988 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:10.988 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.988 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.988 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.988 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:13:10.988 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.988 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.988 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.988 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.988 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.988 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.988 "name": "raid_bdev1", 00:13:10.988 "uuid": "fdcbcf71-275c-41b0-9062-4488ed2efbe7", 00:13:10.988 "strip_size_kb": 64, 00:13:10.988 "state": "configuring", 00:13:10.988 "raid_level": "raid5f", 00:13:10.988 "superblock": true, 00:13:10.988 "num_base_bdevs": 4, 00:13:10.988 "num_base_bdevs_discovered": 1, 00:13:10.988 "num_base_bdevs_operational": 4, 00:13:10.988 "base_bdevs_list": [ 00:13:10.988 { 00:13:10.988 "name": "pt1", 00:13:10.988 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:10.988 "is_configured": true, 00:13:10.988 "data_offset": 2048, 00:13:10.988 "data_size": 63488 00:13:10.988 }, 00:13:10.988 { 00:13:10.988 "name": null, 00:13:10.988 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:10.988 "is_configured": false, 00:13:10.988 "data_offset": 2048, 00:13:10.988 "data_size": 63488 00:13:10.988 }, 00:13:10.988 { 00:13:10.988 "name": null, 00:13:10.988 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:10.988 "is_configured": false, 00:13:10.988 "data_offset": 2048, 00:13:10.988 "data_size": 63488 00:13:10.988 }, 00:13:10.988 { 00:13:10.988 "name": null, 00:13:10.988 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:10.988 "is_configured": false, 00:13:10.988 "data_offset": 2048, 00:13:10.988 "data_size": 63488 00:13:10.988 } 00:13:10.988 ] 00:13:10.988 }' 
00:13:10.988 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.988 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.250 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:13:11.250 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:11.250 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.250 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.250 [2024-10-01 14:38:02.887832] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:11.250 [2024-10-01 14:38:02.887898] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.250 [2024-10-01 14:38:02.887915] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:11.250 [2024-10-01 14:38:02.887926] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.250 [2024-10-01 14:38:02.888330] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.250 [2024-10-01 14:38:02.888353] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:11.250 [2024-10-01 14:38:02.888419] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:11.250 [2024-10-01 14:38:02.888441] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:11.250 pt2 00:13:11.250 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.250 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:11.250 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:11.250 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.250 [2024-10-01 14:38:02.895840] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:11.250 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.250 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:13:11.250 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.250 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:11.250 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:11.250 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:11.250 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:11.250 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.250 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.250 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.250 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.250 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.250 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.250 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.250 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.250 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:13:11.511 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.511 "name": "raid_bdev1", 00:13:11.511 "uuid": "fdcbcf71-275c-41b0-9062-4488ed2efbe7", 00:13:11.511 "strip_size_kb": 64, 00:13:11.511 "state": "configuring", 00:13:11.511 "raid_level": "raid5f", 00:13:11.511 "superblock": true, 00:13:11.511 "num_base_bdevs": 4, 00:13:11.511 "num_base_bdevs_discovered": 1, 00:13:11.511 "num_base_bdevs_operational": 4, 00:13:11.511 "base_bdevs_list": [ 00:13:11.511 { 00:13:11.511 "name": "pt1", 00:13:11.511 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:11.511 "is_configured": true, 00:13:11.511 "data_offset": 2048, 00:13:11.511 "data_size": 63488 00:13:11.511 }, 00:13:11.511 { 00:13:11.511 "name": null, 00:13:11.511 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:11.511 "is_configured": false, 00:13:11.511 "data_offset": 0, 00:13:11.511 "data_size": 63488 00:13:11.511 }, 00:13:11.511 { 00:13:11.511 "name": null, 00:13:11.511 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:11.511 "is_configured": false, 00:13:11.511 "data_offset": 2048, 00:13:11.511 "data_size": 63488 00:13:11.511 }, 00:13:11.511 { 00:13:11.511 "name": null, 00:13:11.511 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:11.511 "is_configured": false, 00:13:11.511 "data_offset": 2048, 00:13:11.511 "data_size": 63488 00:13:11.511 } 00:13:11.511 ] 00:13:11.511 }' 00:13:11.511 14:38:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.511 14:38:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.773 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:11.773 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:11.773 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:13:11.773 14:38:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.773 14:38:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.773 [2024-10-01 14:38:03.223917] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:11.773 [2024-10-01 14:38:03.223977] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.773 [2024-10-01 14:38:03.223994] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:11.773 [2024-10-01 14:38:03.224004] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.773 [2024-10-01 14:38:03.224396] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.773 [2024-10-01 14:38:03.224416] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:11.773 [2024-10-01 14:38:03.224486] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:11.773 [2024-10-01 14:38:03.224508] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:11.773 pt2 00:13:11.773 14:38:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.773 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:11.773 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:11.773 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:11.773 14:38:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.773 14:38:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.773 [2024-10-01 14:38:03.231910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:13:11.773 [2024-10-01 14:38:03.231955] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.773 [2024-10-01 14:38:03.231972] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:11.773 [2024-10-01 14:38:03.231981] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.773 [2024-10-01 14:38:03.232332] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.773 [2024-10-01 14:38:03.232350] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:11.773 [2024-10-01 14:38:03.232408] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:11.773 [2024-10-01 14:38:03.232424] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:11.773 pt3 00:13:11.773 14:38:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.773 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:11.773 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:11.773 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:11.773 14:38:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.773 14:38:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.773 [2024-10-01 14:38:03.239883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:11.773 [2024-10-01 14:38:03.239931] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.773 [2024-10-01 14:38:03.239945] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:11.773 [2024-10-01 14:38:03.239953] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.773 [2024-10-01 14:38:03.240285] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.773 [2024-10-01 14:38:03.240309] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:11.773 [2024-10-01 14:38:03.240361] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:11.773 [2024-10-01 14:38:03.240379] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:11.773 [2024-10-01 14:38:03.240507] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:11.773 [2024-10-01 14:38:03.240515] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:11.773 [2024-10-01 14:38:03.240774] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:11.773 [2024-10-01 14:38:03.245373] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:11.773 [2024-10-01 14:38:03.245397] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:11.773 [2024-10-01 14:38:03.245551] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:11.773 pt4 00:13:11.773 14:38:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.773 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:11.773 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:11.773 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:13:11.773 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.773 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:13:11.773 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:11.773 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:11.773 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:11.773 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.773 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.773 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.773 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.773 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.773 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.773 14:38:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.773 14:38:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.773 14:38:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.773 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.773 "name": "raid_bdev1", 00:13:11.773 "uuid": "fdcbcf71-275c-41b0-9062-4488ed2efbe7", 00:13:11.773 "strip_size_kb": 64, 00:13:11.773 "state": "online", 00:13:11.773 "raid_level": "raid5f", 00:13:11.773 "superblock": true, 00:13:11.773 "num_base_bdevs": 4, 00:13:11.773 "num_base_bdevs_discovered": 4, 00:13:11.773 "num_base_bdevs_operational": 4, 00:13:11.773 "base_bdevs_list": [ 00:13:11.773 { 00:13:11.773 "name": "pt1", 00:13:11.773 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:11.773 "is_configured": true, 00:13:11.773 
"data_offset": 2048, 00:13:11.773 "data_size": 63488 00:13:11.773 }, 00:13:11.773 { 00:13:11.773 "name": "pt2", 00:13:11.773 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:11.773 "is_configured": true, 00:13:11.773 "data_offset": 2048, 00:13:11.773 "data_size": 63488 00:13:11.773 }, 00:13:11.773 { 00:13:11.773 "name": "pt3", 00:13:11.773 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:11.773 "is_configured": true, 00:13:11.773 "data_offset": 2048, 00:13:11.773 "data_size": 63488 00:13:11.773 }, 00:13:11.773 { 00:13:11.773 "name": "pt4", 00:13:11.773 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:11.773 "is_configured": true, 00:13:11.773 "data_offset": 2048, 00:13:11.773 "data_size": 63488 00:13:11.773 } 00:13:11.773 ] 00:13:11.773 }' 00:13:11.773 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.773 14:38:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.034 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:12.034 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:12.034 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:12.034 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:12.034 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:12.034 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:12.034 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:12.034 14:38:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.034 14:38:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.034 14:38:03 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:12.034 [2024-10-01 14:38:03.563160] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:12.034 14:38:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.034 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:12.034 "name": "raid_bdev1", 00:13:12.034 "aliases": [ 00:13:12.034 "fdcbcf71-275c-41b0-9062-4488ed2efbe7" 00:13:12.034 ], 00:13:12.034 "product_name": "Raid Volume", 00:13:12.034 "block_size": 512, 00:13:12.034 "num_blocks": 190464, 00:13:12.034 "uuid": "fdcbcf71-275c-41b0-9062-4488ed2efbe7", 00:13:12.034 "assigned_rate_limits": { 00:13:12.034 "rw_ios_per_sec": 0, 00:13:12.034 "rw_mbytes_per_sec": 0, 00:13:12.034 "r_mbytes_per_sec": 0, 00:13:12.034 "w_mbytes_per_sec": 0 00:13:12.034 }, 00:13:12.034 "claimed": false, 00:13:12.034 "zoned": false, 00:13:12.034 "supported_io_types": { 00:13:12.034 "read": true, 00:13:12.034 "write": true, 00:13:12.034 "unmap": false, 00:13:12.034 "flush": false, 00:13:12.034 "reset": true, 00:13:12.034 "nvme_admin": false, 00:13:12.034 "nvme_io": false, 00:13:12.034 "nvme_io_md": false, 00:13:12.034 "write_zeroes": true, 00:13:12.034 "zcopy": false, 00:13:12.034 "get_zone_info": false, 00:13:12.034 "zone_management": false, 00:13:12.034 "zone_append": false, 00:13:12.034 "compare": false, 00:13:12.034 "compare_and_write": false, 00:13:12.034 "abort": false, 00:13:12.034 "seek_hole": false, 00:13:12.034 "seek_data": false, 00:13:12.034 "copy": false, 00:13:12.034 "nvme_iov_md": false 00:13:12.034 }, 00:13:12.034 "driver_specific": { 00:13:12.034 "raid": { 00:13:12.034 "uuid": "fdcbcf71-275c-41b0-9062-4488ed2efbe7", 00:13:12.034 "strip_size_kb": 64, 00:13:12.034 "state": "online", 00:13:12.034 "raid_level": "raid5f", 00:13:12.034 "superblock": true, 00:13:12.034 "num_base_bdevs": 4, 00:13:12.034 "num_base_bdevs_discovered": 4, 
00:13:12.034 "num_base_bdevs_operational": 4, 00:13:12.034 "base_bdevs_list": [ 00:13:12.034 { 00:13:12.034 "name": "pt1", 00:13:12.034 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:12.034 "is_configured": true, 00:13:12.034 "data_offset": 2048, 00:13:12.034 "data_size": 63488 00:13:12.034 }, 00:13:12.034 { 00:13:12.034 "name": "pt2", 00:13:12.034 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:12.034 "is_configured": true, 00:13:12.034 "data_offset": 2048, 00:13:12.034 "data_size": 63488 00:13:12.034 }, 00:13:12.034 { 00:13:12.034 "name": "pt3", 00:13:12.034 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:12.034 "is_configured": true, 00:13:12.034 "data_offset": 2048, 00:13:12.034 "data_size": 63488 00:13:12.034 }, 00:13:12.034 { 00:13:12.034 "name": "pt4", 00:13:12.034 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:12.034 "is_configured": true, 00:13:12.034 "data_offset": 2048, 00:13:12.034 "data_size": 63488 00:13:12.034 } 00:13:12.034 ] 00:13:12.034 } 00:13:12.034 } 00:13:12.034 }' 00:13:12.034 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:12.034 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:12.034 pt2 00:13:12.034 pt3 00:13:12.034 pt4' 00:13:12.034 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:12.034 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:12.034 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:12.034 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:12.034 14:38:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.034 14:38:03 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.034 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:12.034 14:38:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.034 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:12.034 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:12.034 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:12.034 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:12.034 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:12.034 14:38:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.034 14:38:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.034 14:38:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.295 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:12.295 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:12.295 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:12.295 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:12.295 14:38:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.295 14:38:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.295 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- 
# jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:12.295 14:38:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.295 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:12.295 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:12.295 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:12.295 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:12.295 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:12.295 14:38:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.295 14:38:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.295 14:38:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.295 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:12.295 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:12.295 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:12.295 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:12.295 14:38:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.295 14:38:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.295 [2024-10-01 14:38:03.791181] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:12.295 14:38:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.296 
14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' fdcbcf71-275c-41b0-9062-4488ed2efbe7 '!=' fdcbcf71-275c-41b0-9062-4488ed2efbe7 ']' 00:13:12.296 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:13:12.296 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:12.296 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:12.296 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:13:12.296 14:38:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.296 14:38:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.296 [2024-10-01 14:38:03.827042] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:12.296 14:38:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.296 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:12.296 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:12.296 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:12.296 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:12.296 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:12.296 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:12.296 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.296 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.296 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:12.296 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.296 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.296 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.296 14:38:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.296 14:38:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.296 14:38:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.296 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.296 "name": "raid_bdev1", 00:13:12.296 "uuid": "fdcbcf71-275c-41b0-9062-4488ed2efbe7", 00:13:12.296 "strip_size_kb": 64, 00:13:12.296 "state": "online", 00:13:12.296 "raid_level": "raid5f", 00:13:12.296 "superblock": true, 00:13:12.296 "num_base_bdevs": 4, 00:13:12.296 "num_base_bdevs_discovered": 3, 00:13:12.296 "num_base_bdevs_operational": 3, 00:13:12.296 "base_bdevs_list": [ 00:13:12.296 { 00:13:12.296 "name": null, 00:13:12.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.296 "is_configured": false, 00:13:12.296 "data_offset": 0, 00:13:12.296 "data_size": 63488 00:13:12.296 }, 00:13:12.296 { 00:13:12.296 "name": "pt2", 00:13:12.296 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:12.296 "is_configured": true, 00:13:12.296 "data_offset": 2048, 00:13:12.296 "data_size": 63488 00:13:12.296 }, 00:13:12.296 { 00:13:12.296 "name": "pt3", 00:13:12.296 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:12.296 "is_configured": true, 00:13:12.296 "data_offset": 2048, 00:13:12.296 "data_size": 63488 00:13:12.296 }, 00:13:12.296 { 00:13:12.296 "name": "pt4", 00:13:12.296 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:12.296 "is_configured": true, 00:13:12.296 
"data_offset": 2048, 00:13:12.296 "data_size": 63488 00:13:12.296 } 00:13:12.296 ] 00:13:12.296 }' 00:13:12.296 14:38:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.296 14:38:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.557 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:12.557 14:38:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.557 14:38:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.557 [2024-10-01 14:38:04.147083] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:12.557 [2024-10-01 14:38:04.147120] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:12.557 [2024-10-01 14:38:04.147190] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:12.557 [2024-10-01 14:38:04.147265] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:12.557 [2024-10-01 14:38:04.147281] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:12.557 14:38:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.557 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:12.557 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.557 14:38:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.557 14:38:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.557 14:38:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.557 14:38:04 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:12.557 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:13:12.557 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:13:12.557 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:12.557 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:13:12.557 14:38:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.558 14:38:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.558 14:38:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.558 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:12.558 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:12.558 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:13:12.558 14:38:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.558 14:38:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.558 14:38:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.558 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:12.558 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:12.558 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:13:12.558 14:38:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.558 14:38:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.558 14:38:04 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.558 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:12.558 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:12.558 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:12.558 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:12.558 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:12.558 14:38:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.558 14:38:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.558 [2024-10-01 14:38:04.211099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:12.558 [2024-10-01 14:38:04.211164] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:12.558 [2024-10-01 14:38:04.211182] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:12.558 [2024-10-01 14:38:04.211192] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:12.558 [2024-10-01 14:38:04.213381] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:12.558 [2024-10-01 14:38:04.213417] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:12.558 [2024-10-01 14:38:04.213492] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:12.558 [2024-10-01 14:38:04.213531] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:12.558 pt2 00:13:12.558 14:38:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.558 14:38:04 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:13:12.558 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:12.558 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:12.558 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:12.558 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:12.558 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:12.558 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.558 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.558 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.558 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.558 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.558 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.558 14:38:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.558 14:38:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.558 14:38:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.819 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.819 "name": "raid_bdev1", 00:13:12.819 "uuid": "fdcbcf71-275c-41b0-9062-4488ed2efbe7", 00:13:12.819 "strip_size_kb": 64, 00:13:12.819 "state": "configuring", 00:13:12.819 "raid_level": "raid5f", 00:13:12.819 "superblock": true, 00:13:12.819 
"num_base_bdevs": 4, 00:13:12.819 "num_base_bdevs_discovered": 1, 00:13:12.819 "num_base_bdevs_operational": 3, 00:13:12.819 "base_bdevs_list": [ 00:13:12.819 { 00:13:12.819 "name": null, 00:13:12.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.819 "is_configured": false, 00:13:12.819 "data_offset": 2048, 00:13:12.819 "data_size": 63488 00:13:12.819 }, 00:13:12.819 { 00:13:12.819 "name": "pt2", 00:13:12.819 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:12.819 "is_configured": true, 00:13:12.819 "data_offset": 2048, 00:13:12.819 "data_size": 63488 00:13:12.819 }, 00:13:12.819 { 00:13:12.819 "name": null, 00:13:12.819 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:12.819 "is_configured": false, 00:13:12.819 "data_offset": 2048, 00:13:12.819 "data_size": 63488 00:13:12.819 }, 00:13:12.819 { 00:13:12.819 "name": null, 00:13:12.819 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:12.819 "is_configured": false, 00:13:12.819 "data_offset": 2048, 00:13:12.819 "data_size": 63488 00:13:12.819 } 00:13:12.819 ] 00:13:12.819 }' 00:13:12.819 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.819 14:38:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.081 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:13.081 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:13.081 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:13.081 14:38:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.082 14:38:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.082 [2024-10-01 14:38:04.511192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:13.082 [2024-10-01 
14:38:04.511253] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.082 [2024-10-01 14:38:04.511272] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:13:13.082 [2024-10-01 14:38:04.511281] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.082 [2024-10-01 14:38:04.511678] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.082 [2024-10-01 14:38:04.511700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:13.082 [2024-10-01 14:38:04.511786] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:13.082 [2024-10-01 14:38:04.511818] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:13.082 pt3 00:13:13.082 14:38:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.082 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:13:13.082 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:13.082 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:13.082 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:13.082 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:13.082 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:13.082 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.082 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.082 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:13:13.082 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.082 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.082 14:38:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.082 14:38:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.082 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.082 14:38:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.082 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.082 "name": "raid_bdev1", 00:13:13.082 "uuid": "fdcbcf71-275c-41b0-9062-4488ed2efbe7", 00:13:13.082 "strip_size_kb": 64, 00:13:13.082 "state": "configuring", 00:13:13.082 "raid_level": "raid5f", 00:13:13.082 "superblock": true, 00:13:13.082 "num_base_bdevs": 4, 00:13:13.082 "num_base_bdevs_discovered": 2, 00:13:13.082 "num_base_bdevs_operational": 3, 00:13:13.082 "base_bdevs_list": [ 00:13:13.082 { 00:13:13.082 "name": null, 00:13:13.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.082 "is_configured": false, 00:13:13.082 "data_offset": 2048, 00:13:13.082 "data_size": 63488 00:13:13.082 }, 00:13:13.082 { 00:13:13.082 "name": "pt2", 00:13:13.082 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:13.082 "is_configured": true, 00:13:13.082 "data_offset": 2048, 00:13:13.082 "data_size": 63488 00:13:13.082 }, 00:13:13.082 { 00:13:13.082 "name": "pt3", 00:13:13.082 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:13.082 "is_configured": true, 00:13:13.082 "data_offset": 2048, 00:13:13.082 "data_size": 63488 00:13:13.082 }, 00:13:13.082 { 00:13:13.082 "name": null, 00:13:13.082 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:13.082 "is_configured": false, 00:13:13.082 "data_offset": 2048, 
00:13:13.082 "data_size": 63488 00:13:13.082 } 00:13:13.082 ] 00:13:13.082 }' 00:13:13.082 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.082 14:38:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.411 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:13.411 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:13.411 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:13:13.411 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:13.411 14:38:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.411 14:38:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.411 [2024-10-01 14:38:04.839264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:13.411 [2024-10-01 14:38:04.839327] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.411 [2024-10-01 14:38:04.839348] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:13:13.411 [2024-10-01 14:38:04.839359] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.411 [2024-10-01 14:38:04.839791] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.411 [2024-10-01 14:38:04.839819] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:13.411 [2024-10-01 14:38:04.839892] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:13.411 [2024-10-01 14:38:04.839918] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:13.411 [2024-10-01 14:38:04.840041] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:13.411 [2024-10-01 14:38:04.840055] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:13.411 [2024-10-01 14:38:04.840297] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:13.411 [2024-10-01 14:38:04.844978] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:13.411 [2024-10-01 14:38:04.845005] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:13.411 pt4 00:13:13.411 [2024-10-01 14:38:04.845264] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:13.411 14:38:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.411 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:13.411 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:13.411 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:13.411 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:13.411 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:13.411 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:13.411 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.411 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.411 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.411 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.411 
14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.411 14:38:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.411 14:38:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.411 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.411 14:38:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.411 14:38:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.411 "name": "raid_bdev1", 00:13:13.411 "uuid": "fdcbcf71-275c-41b0-9062-4488ed2efbe7", 00:13:13.411 "strip_size_kb": 64, 00:13:13.411 "state": "online", 00:13:13.411 "raid_level": "raid5f", 00:13:13.411 "superblock": true, 00:13:13.411 "num_base_bdevs": 4, 00:13:13.411 "num_base_bdevs_discovered": 3, 00:13:13.411 "num_base_bdevs_operational": 3, 00:13:13.411 "base_bdevs_list": [ 00:13:13.411 { 00:13:13.411 "name": null, 00:13:13.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.411 "is_configured": false, 00:13:13.411 "data_offset": 2048, 00:13:13.411 "data_size": 63488 00:13:13.411 }, 00:13:13.411 { 00:13:13.411 "name": "pt2", 00:13:13.411 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:13.411 "is_configured": true, 00:13:13.411 "data_offset": 2048, 00:13:13.411 "data_size": 63488 00:13:13.411 }, 00:13:13.411 { 00:13:13.411 "name": "pt3", 00:13:13.411 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:13.411 "is_configured": true, 00:13:13.411 "data_offset": 2048, 00:13:13.411 "data_size": 63488 00:13:13.411 }, 00:13:13.411 { 00:13:13.411 "name": "pt4", 00:13:13.411 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:13.411 "is_configured": true, 00:13:13.411 "data_offset": 2048, 00:13:13.411 "data_size": 63488 00:13:13.411 } 00:13:13.411 ] 00:13:13.411 }' 00:13:13.411 14:38:04 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.411 14:38:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.674 14:38:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:13.674 14:38:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.674 14:38:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.674 [2024-10-01 14:38:05.162566] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:13.674 [2024-10-01 14:38:05.162601] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:13.674 [2024-10-01 14:38:05.162669] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:13.674 [2024-10-01 14:38:05.162756] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:13.674 [2024-10-01 14:38:05.162769] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:13.674 14:38:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.674 14:38:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.674 14:38:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.674 14:38:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:13.674 14:38:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.674 14:38:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.674 14:38:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:13.674 14:38:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:13:13.674 14:38:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:13:13.674 14:38:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:13:13.674 14:38:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:13:13.674 14:38:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.674 14:38:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.674 14:38:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.674 14:38:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:13.674 14:38:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.674 14:38:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.674 [2024-10-01 14:38:05.214582] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:13.674 [2024-10-01 14:38:05.214647] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.674 [2024-10-01 14:38:05.214662] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:13:13.674 [2024-10-01 14:38:05.214673] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.674 [2024-10-01 14:38:05.216882] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.674 [2024-10-01 14:38:05.216921] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:13.674 [2024-10-01 14:38:05.216995] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:13.674 [2024-10-01 14:38:05.217042] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:13.674 
[2024-10-01 14:38:05.217154] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:13.674 [2024-10-01 14:38:05.217169] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:13.674 [2024-10-01 14:38:05.217184] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:13:13.674 [2024-10-01 14:38:05.217234] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:13.674 [2024-10-01 14:38:05.217327] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:13.674 pt1 00:13:13.674 14:38:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.674 14:38:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:13:13.674 14:38:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:13:13.674 14:38:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:13.674 14:38:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:13.674 14:38:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:13.674 14:38:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:13.674 14:38:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:13.674 14:38:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.674 14:38:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.674 14:38:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.674 14:38:05 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.674 14:38:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.674 14:38:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.674 14:38:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.674 14:38:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.674 14:38:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.674 14:38:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.674 "name": "raid_bdev1", 00:13:13.674 "uuid": "fdcbcf71-275c-41b0-9062-4488ed2efbe7", 00:13:13.674 "strip_size_kb": 64, 00:13:13.674 "state": "configuring", 00:13:13.674 "raid_level": "raid5f", 00:13:13.674 "superblock": true, 00:13:13.674 "num_base_bdevs": 4, 00:13:13.674 "num_base_bdevs_discovered": 2, 00:13:13.674 "num_base_bdevs_operational": 3, 00:13:13.674 "base_bdevs_list": [ 00:13:13.674 { 00:13:13.674 "name": null, 00:13:13.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.674 "is_configured": false, 00:13:13.674 "data_offset": 2048, 00:13:13.674 "data_size": 63488 00:13:13.674 }, 00:13:13.674 { 00:13:13.674 "name": "pt2", 00:13:13.674 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:13.674 "is_configured": true, 00:13:13.674 "data_offset": 2048, 00:13:13.674 "data_size": 63488 00:13:13.674 }, 00:13:13.674 { 00:13:13.674 "name": "pt3", 00:13:13.674 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:13.674 "is_configured": true, 00:13:13.674 "data_offset": 2048, 00:13:13.674 "data_size": 63488 00:13:13.674 }, 00:13:13.674 { 00:13:13.674 "name": null, 00:13:13.675 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:13.675 "is_configured": false, 00:13:13.675 "data_offset": 2048, 00:13:13.675 "data_size": 63488 00:13:13.675 } 00:13:13.675 ] 
00:13:13.675 }' 00:13:13.675 14:38:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.675 14:38:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.936 14:38:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:13:13.936 14:38:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.936 14:38:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.936 14:38:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:13.936 14:38:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.936 14:38:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:13:13.936 14:38:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:13.936 14:38:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.936 14:38:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.936 [2024-10-01 14:38:05.542688] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:13.936 [2024-10-01 14:38:05.542768] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.936 [2024-10-01 14:38:05.542793] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:13:13.936 [2024-10-01 14:38:05.542803] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.936 [2024-10-01 14:38:05.543209] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.936 [2024-10-01 14:38:05.543231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:13:13.936 [2024-10-01 14:38:05.543306] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:13.936 [2024-10-01 14:38:05.543328] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:13.936 [2024-10-01 14:38:05.543451] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:13:13.936 [2024-10-01 14:38:05.543465] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:13.936 [2024-10-01 14:38:05.543722] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:13.936 [2024-10-01 14:38:05.548300] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:13:13.936 [2024-10-01 14:38:05.548327] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:13:13.936 pt4 00:13:13.936 [2024-10-01 14:38:05.548583] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:13.936 14:38:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.936 14:38:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:13.936 14:38:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:13.936 14:38:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:13.936 14:38:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:13.936 14:38:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:13.936 14:38:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:13.936 14:38:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.936 14:38:05 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.936 14:38:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.936 14:38:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.936 14:38:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.936 14:38:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.936 14:38:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.936 14:38:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.936 14:38:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.936 14:38:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.936 "name": "raid_bdev1", 00:13:13.936 "uuid": "fdcbcf71-275c-41b0-9062-4488ed2efbe7", 00:13:13.936 "strip_size_kb": 64, 00:13:13.936 "state": "online", 00:13:13.936 "raid_level": "raid5f", 00:13:13.936 "superblock": true, 00:13:13.936 "num_base_bdevs": 4, 00:13:13.936 "num_base_bdevs_discovered": 3, 00:13:13.936 "num_base_bdevs_operational": 3, 00:13:13.937 "base_bdevs_list": [ 00:13:13.937 { 00:13:13.937 "name": null, 00:13:13.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.937 "is_configured": false, 00:13:13.937 "data_offset": 2048, 00:13:13.937 "data_size": 63488 00:13:13.937 }, 00:13:13.937 { 00:13:13.937 "name": "pt2", 00:13:13.937 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:13.937 "is_configured": true, 00:13:13.937 "data_offset": 2048, 00:13:13.937 "data_size": 63488 00:13:13.937 }, 00:13:13.937 { 00:13:13.937 "name": "pt3", 00:13:13.937 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:13.937 "is_configured": true, 00:13:13.937 "data_offset": 2048, 00:13:13.937 "data_size": 63488 
00:13:13.937 }, 00:13:13.937 { 00:13:13.937 "name": "pt4", 00:13:13.937 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:13.937 "is_configured": true, 00:13:13.937 "data_offset": 2048, 00:13:13.937 "data_size": 63488 00:13:13.937 } 00:13:13.937 ] 00:13:13.937 }' 00:13:13.937 14:38:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.937 14:38:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.196 14:38:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:14.196 14:38:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:14.196 14:38:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.196 14:38:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.196 14:38:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.458 14:38:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:14.458 14:38:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:14.458 14:38:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:14.458 14:38:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.458 14:38:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.458 [2024-10-01 14:38:05.894093] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:14.458 14:38:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.458 14:38:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' fdcbcf71-275c-41b0-9062-4488ed2efbe7 '!=' fdcbcf71-275c-41b0-9062-4488ed2efbe7 ']' 00:13:14.458 14:38:05 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81986 00:13:14.458 14:38:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 81986 ']' 00:13:14.458 14:38:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 81986 00:13:14.458 14:38:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:13:14.458 14:38:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:14.458 14:38:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81986 00:13:14.458 14:38:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:14.458 killing process with pid 81986 00:13:14.458 14:38:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:14.458 14:38:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81986' 00:13:14.458 14:38:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 81986 00:13:14.458 [2024-10-01 14:38:05.941745] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:14.458 14:38:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 81986 00:13:14.458 [2024-10-01 14:38:05.941853] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:14.458 [2024-10-01 14:38:05.941938] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:14.458 [2024-10-01 14:38:05.941995] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:13:14.719 [2024-10-01 14:38:06.202971] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:15.661 14:38:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:15.662 
00:13:15.662 real 0m6.379s 00:13:15.662 user 0m9.857s 00:13:15.662 sys 0m1.104s 00:13:15.662 ************************************ 00:13:15.662 END TEST raid5f_superblock_test 00:13:15.662 ************************************ 00:13:15.662 14:38:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:15.662 14:38:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.662 14:38:07 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:13:15.662 14:38:07 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:13:15.662 14:38:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:15.662 14:38:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:15.662 14:38:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:15.662 ************************************ 00:13:15.662 START TEST raid5f_rebuild_test 00:13:15.662 ************************************ 00:13:15.662 14:38:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 false false true 00:13:15.662 14:38:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:13:15.662 14:38:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:15.662 14:38:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:15.662 14:38:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:15.662 14:38:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:15.662 14:38:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:15.662 14:38:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:15.662 14:38:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:13:15.662 14:38:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:15.662 14:38:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:15.662 14:38:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:15.662 14:38:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:15.662 14:38:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:15.662 14:38:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:15.662 14:38:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:15.662 14:38:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:15.662 14:38:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:15.662 14:38:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:15.662 14:38:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:15.662 14:38:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:15.662 14:38:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:15.662 14:38:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:15.662 14:38:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:15.662 14:38:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:15.662 14:38:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:15.662 14:38:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:15.662 14:38:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:13:15.662 14:38:07 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:13:15.662 14:38:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:13:15.662 14:38:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:13:15.662 14:38:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:15.662 14:38:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=82455 00:13:15.662 14:38:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 82455 00:13:15.662 14:38:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 82455 ']' 00:13:15.662 14:38:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.662 14:38:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:15.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.662 14:38:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.662 14:38:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:15.662 14:38:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:15.662 14:38:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.662 [2024-10-01 14:38:07.197240] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:13:15.662 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:15.662 Zero copy mechanism will not be used. 
00:13:15.662 [2024-10-01 14:38:07.197376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82455 ] 00:13:15.921 [2024-10-01 14:38:07.348727] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.921 [2024-10-01 14:38:07.574177] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.182 [2024-10-01 14:38:07.722838] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:16.182 [2024-10-01 14:38:07.722882] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:16.442 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:16.442 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:13:16.442 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:16.442 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:16.442 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.442 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.442 BaseBdev1_malloc 00:13:16.442 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.442 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:16.442 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.442 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.443 [2024-10-01 14:38:08.077624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:13:16.443 [2024-10-01 14:38:08.077696] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.443 [2024-10-01 14:38:08.077733] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:16.443 [2024-10-01 14:38:08.077749] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.443 [2024-10-01 14:38:08.080145] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.443 [2024-10-01 14:38:08.080190] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:16.443 BaseBdev1 00:13:16.443 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.443 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:16.443 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:16.443 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.443 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.443 BaseBdev2_malloc 00:13:16.443 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.443 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:16.702 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.702 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.702 [2024-10-01 14:38:08.128947] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:16.702 [2024-10-01 14:38:08.129018] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.702 [2024-10-01 14:38:08.129039] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:16.702 [2024-10-01 14:38:08.129052] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.702 [2024-10-01 14:38:08.131362] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.702 [2024-10-01 14:38:08.131407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:16.702 BaseBdev2 00:13:16.702 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.702 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:16.702 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:16.702 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.702 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.702 BaseBdev3_malloc 00:13:16.702 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.703 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:16.703 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.703 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.703 [2024-10-01 14:38:08.171598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:16.703 [2024-10-01 14:38:08.171661] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.703 [2024-10-01 14:38:08.171683] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:16.703 [2024-10-01 14:38:08.171694] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.703 
[2024-10-01 14:38:08.173997] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.703 [2024-10-01 14:38:08.174037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:16.703 BaseBdev3 00:13:16.703 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.703 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:16.703 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:16.703 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.703 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.703 BaseBdev4_malloc 00:13:16.703 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.703 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:16.703 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.703 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.703 [2024-10-01 14:38:08.214008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:16.703 [2024-10-01 14:38:08.214063] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.703 [2024-10-01 14:38:08.214085] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:16.703 [2024-10-01 14:38:08.214097] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.703 [2024-10-01 14:38:08.216349] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.703 [2024-10-01 14:38:08.216388] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:13:16.703 BaseBdev4 00:13:16.703 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.703 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:16.703 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.703 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.703 spare_malloc 00:13:16.703 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.703 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:16.703 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.703 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.703 spare_delay 00:13:16.703 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.703 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:16.703 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.703 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.703 [2024-10-01 14:38:08.260321] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:16.703 [2024-10-01 14:38:08.260378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.703 [2024-10-01 14:38:08.260398] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:16.703 [2024-10-01 14:38:08.260409] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.703 [2024-10-01 14:38:08.262679] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.703 [2024-10-01 14:38:08.262726] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:16.703 spare 00:13:16.703 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.703 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:16.703 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.703 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.703 [2024-10-01 14:38:08.268386] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:16.703 [2024-10-01 14:38:08.270331] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:16.703 [2024-10-01 14:38:08.270399] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:16.703 [2024-10-01 14:38:08.270452] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:16.703 [2024-10-01 14:38:08.270541] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:16.703 [2024-10-01 14:38:08.270560] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:16.703 [2024-10-01 14:38:08.270852] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:16.703 [2024-10-01 14:38:08.275836] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:16.703 [2024-10-01 14:38:08.275858] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:16.703 [2024-10-01 14:38:08.276045] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:16.703 14:38:08 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.703 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:13:16.703 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:16.703 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.703 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:16.703 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:16.703 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:16.703 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.703 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.703 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.703 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.703 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.703 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.703 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.703 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.703 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.703 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.703 "name": "raid_bdev1", 00:13:16.703 "uuid": "c2630ba2-901d-4376-923e-6d6d5c55a64e", 00:13:16.703 "strip_size_kb": 64, 00:13:16.703 "state": "online", 00:13:16.703 
"raid_level": "raid5f", 00:13:16.703 "superblock": false, 00:13:16.703 "num_base_bdevs": 4, 00:13:16.703 "num_base_bdevs_discovered": 4, 00:13:16.703 "num_base_bdevs_operational": 4, 00:13:16.703 "base_bdevs_list": [ 00:13:16.703 { 00:13:16.703 "name": "BaseBdev1", 00:13:16.703 "uuid": "39ba6c9a-ac7e-5872-99a7-cb024d275b73", 00:13:16.703 "is_configured": true, 00:13:16.703 "data_offset": 0, 00:13:16.703 "data_size": 65536 00:13:16.703 }, 00:13:16.703 { 00:13:16.703 "name": "BaseBdev2", 00:13:16.703 "uuid": "3c0bea03-1178-5b52-a2f6-0316d589486e", 00:13:16.703 "is_configured": true, 00:13:16.703 "data_offset": 0, 00:13:16.703 "data_size": 65536 00:13:16.703 }, 00:13:16.703 { 00:13:16.703 "name": "BaseBdev3", 00:13:16.703 "uuid": "d425f0b6-ff2b-53f7-9983-7ae642883fc2", 00:13:16.703 "is_configured": true, 00:13:16.703 "data_offset": 0, 00:13:16.703 "data_size": 65536 00:13:16.703 }, 00:13:16.703 { 00:13:16.703 "name": "BaseBdev4", 00:13:16.703 "uuid": "fe6da438-ef9c-5047-b5c2-925016d47ee0", 00:13:16.703 "is_configured": true, 00:13:16.703 "data_offset": 0, 00:13:16.703 "data_size": 65536 00:13:16.703 } 00:13:16.703 ] 00:13:16.703 }' 00:13:16.703 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.703 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.962 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:16.962 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:16.962 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.962 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.962 [2024-10-01 14:38:08.585978] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:16.962 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:16.962 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:13:16.962 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:16.963 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.963 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.963 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.963 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.963 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:16.963 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:16.963 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:16.963 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:16.963 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:16.963 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:16.963 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:16.963 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:16.963 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:16.963 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:16.963 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:16.963 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:16.963 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:13:16.963 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:17.246 [2024-10-01 14:38:08.833883] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:17.246 /dev/nbd0 00:13:17.246 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:17.246 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:17.246 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:17.246 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:17.246 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:17.246 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:17.246 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:17.246 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:17.246 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:17.246 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:17.246 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:17.246 1+0 records in 00:13:17.246 1+0 records out 00:13:17.246 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000297197 s, 13.8 MB/s 00:13:17.246 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.246 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:17.246 14:38:08 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.246 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:17.246 14:38:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:17.246 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:17.246 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:17.246 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:13:17.246 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:13:17.246 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:13:17.246 14:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:13:17.818 512+0 records in 00:13:17.818 512+0 records out 00:13:17.818 100663296 bytes (101 MB, 96 MiB) copied, 0.522129 s, 193 MB/s 00:13:17.818 14:38:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:17.818 14:38:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:17.818 14:38:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:17.818 14:38:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:17.818 14:38:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:17.818 14:38:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:17.818 14:38:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:18.078 14:38:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:18.078 
[2024-10-01 14:38:09.628858] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:18.079 14:38:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:18.079 14:38:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:18.079 14:38:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:18.079 14:38:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:18.079 14:38:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:18.079 14:38:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:18.079 14:38:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:18.079 14:38:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:18.079 14:38:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.079 14:38:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.079 [2024-10-01 14:38:09.642279] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:18.079 14:38:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.079 14:38:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:18.079 14:38:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:18.079 14:38:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:18.079 14:38:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:18.079 14:38:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:18.079 14:38:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:13:18.079 14:38:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.079 14:38:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.079 14:38:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.079 14:38:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.079 14:38:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.079 14:38:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.079 14:38:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.079 14:38:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.079 14:38:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.079 14:38:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.079 "name": "raid_bdev1", 00:13:18.079 "uuid": "c2630ba2-901d-4376-923e-6d6d5c55a64e", 00:13:18.079 "strip_size_kb": 64, 00:13:18.079 "state": "online", 00:13:18.079 "raid_level": "raid5f", 00:13:18.079 "superblock": false, 00:13:18.079 "num_base_bdevs": 4, 00:13:18.079 "num_base_bdevs_discovered": 3, 00:13:18.079 "num_base_bdevs_operational": 3, 00:13:18.079 "base_bdevs_list": [ 00:13:18.079 { 00:13:18.079 "name": null, 00:13:18.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.079 "is_configured": false, 00:13:18.079 "data_offset": 0, 00:13:18.079 "data_size": 65536 00:13:18.079 }, 00:13:18.079 { 00:13:18.079 "name": "BaseBdev2", 00:13:18.079 "uuid": "3c0bea03-1178-5b52-a2f6-0316d589486e", 00:13:18.079 "is_configured": true, 00:13:18.079 "data_offset": 0, 00:13:18.079 "data_size": 65536 00:13:18.079 }, 00:13:18.079 { 00:13:18.079 "name": "BaseBdev3", 00:13:18.079 "uuid": 
"d425f0b6-ff2b-53f7-9983-7ae642883fc2", 00:13:18.079 "is_configured": true, 00:13:18.079 "data_offset": 0, 00:13:18.079 "data_size": 65536 00:13:18.079 }, 00:13:18.079 { 00:13:18.079 "name": "BaseBdev4", 00:13:18.079 "uuid": "fe6da438-ef9c-5047-b5c2-925016d47ee0", 00:13:18.079 "is_configured": true, 00:13:18.079 "data_offset": 0, 00:13:18.079 "data_size": 65536 00:13:18.079 } 00:13:18.079 ] 00:13:18.079 }' 00:13:18.079 14:38:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.079 14:38:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.645 14:38:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:18.645 14:38:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.645 14:38:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.645 [2024-10-01 14:38:10.050358] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:18.645 [2024-10-01 14:38:10.060185] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:13:18.645 14:38:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.645 14:38:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:18.645 [2024-10-01 14:38:10.066956] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:19.585 14:38:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:19.585 14:38:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.585 14:38:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:19.585 14:38:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:19.585 14:38:11 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:19.585 14:38:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.585 14:38:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.585 14:38:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.585 14:38:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.585 14:38:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.585 14:38:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.585 "name": "raid_bdev1", 00:13:19.585 "uuid": "c2630ba2-901d-4376-923e-6d6d5c55a64e", 00:13:19.585 "strip_size_kb": 64, 00:13:19.585 "state": "online", 00:13:19.585 "raid_level": "raid5f", 00:13:19.585 "superblock": false, 00:13:19.585 "num_base_bdevs": 4, 00:13:19.585 "num_base_bdevs_discovered": 4, 00:13:19.585 "num_base_bdevs_operational": 4, 00:13:19.585 "process": { 00:13:19.585 "type": "rebuild", 00:13:19.585 "target": "spare", 00:13:19.585 "progress": { 00:13:19.585 "blocks": 19200, 00:13:19.585 "percent": 9 00:13:19.585 } 00:13:19.585 }, 00:13:19.585 "base_bdevs_list": [ 00:13:19.585 { 00:13:19.585 "name": "spare", 00:13:19.585 "uuid": "30188c4a-4b3a-5c98-9c9c-66dd04c109c7", 00:13:19.585 "is_configured": true, 00:13:19.585 "data_offset": 0, 00:13:19.585 "data_size": 65536 00:13:19.585 }, 00:13:19.585 { 00:13:19.585 "name": "BaseBdev2", 00:13:19.586 "uuid": "3c0bea03-1178-5b52-a2f6-0316d589486e", 00:13:19.586 "is_configured": true, 00:13:19.586 "data_offset": 0, 00:13:19.586 "data_size": 65536 00:13:19.586 }, 00:13:19.586 { 00:13:19.586 "name": "BaseBdev3", 00:13:19.586 "uuid": "d425f0b6-ff2b-53f7-9983-7ae642883fc2", 00:13:19.586 "is_configured": true, 00:13:19.586 "data_offset": 0, 00:13:19.586 "data_size": 65536 00:13:19.586 }, 
00:13:19.586 { 00:13:19.586 "name": "BaseBdev4", 00:13:19.586 "uuid": "fe6da438-ef9c-5047-b5c2-925016d47ee0", 00:13:19.586 "is_configured": true, 00:13:19.586 "data_offset": 0, 00:13:19.586 "data_size": 65536 00:13:19.586 } 00:13:19.586 ] 00:13:19.586 }' 00:13:19.586 14:38:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.586 14:38:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:19.586 14:38:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.586 14:38:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:19.586 14:38:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:19.586 14:38:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.586 14:38:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.586 [2024-10-01 14:38:11.172165] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:19.586 [2024-10-01 14:38:11.175918] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:19.586 [2024-10-01 14:38:11.175990] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:19.586 [2024-10-01 14:38:11.176009] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:19.586 [2024-10-01 14:38:11.176019] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:19.586 14:38:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.586 14:38:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:19.586 14:38:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:13:19.586 14:38:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:19.586 14:38:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:19.586 14:38:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:19.586 14:38:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:19.586 14:38:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.586 14:38:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.586 14:38:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.586 14:38:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.586 14:38:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.586 14:38:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.586 14:38:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.586 14:38:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.586 14:38:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.586 14:38:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.586 "name": "raid_bdev1", 00:13:19.586 "uuid": "c2630ba2-901d-4376-923e-6d6d5c55a64e", 00:13:19.586 "strip_size_kb": 64, 00:13:19.586 "state": "online", 00:13:19.586 "raid_level": "raid5f", 00:13:19.586 "superblock": false, 00:13:19.586 "num_base_bdevs": 4, 00:13:19.586 "num_base_bdevs_discovered": 3, 00:13:19.586 "num_base_bdevs_operational": 3, 00:13:19.586 "base_bdevs_list": [ 00:13:19.586 { 00:13:19.586 "name": null, 00:13:19.586 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:19.586 "is_configured": false, 00:13:19.586 "data_offset": 0, 00:13:19.586 "data_size": 65536 00:13:19.586 }, 00:13:19.586 { 00:13:19.586 "name": "BaseBdev2", 00:13:19.586 "uuid": "3c0bea03-1178-5b52-a2f6-0316d589486e", 00:13:19.586 "is_configured": true, 00:13:19.586 "data_offset": 0, 00:13:19.586 "data_size": 65536 00:13:19.586 }, 00:13:19.586 { 00:13:19.586 "name": "BaseBdev3", 00:13:19.586 "uuid": "d425f0b6-ff2b-53f7-9983-7ae642883fc2", 00:13:19.586 "is_configured": true, 00:13:19.586 "data_offset": 0, 00:13:19.586 "data_size": 65536 00:13:19.586 }, 00:13:19.586 { 00:13:19.586 "name": "BaseBdev4", 00:13:19.586 "uuid": "fe6da438-ef9c-5047-b5c2-925016d47ee0", 00:13:19.586 "is_configured": true, 00:13:19.586 "data_offset": 0, 00:13:19.586 "data_size": 65536 00:13:19.586 } 00:13:19.586 ] 00:13:19.586 }' 00:13:19.586 14:38:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.586 14:38:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.846 14:38:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:19.846 14:38:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.846 14:38:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:19.846 14:38:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:19.846 14:38:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:20.107 14:38:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.107 14:38:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.107 14:38:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.107 14:38:11 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.107 14:38:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.107 14:38:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:20.107 "name": "raid_bdev1", 00:13:20.107 "uuid": "c2630ba2-901d-4376-923e-6d6d5c55a64e", 00:13:20.107 "strip_size_kb": 64, 00:13:20.107 "state": "online", 00:13:20.107 "raid_level": "raid5f", 00:13:20.107 "superblock": false, 00:13:20.107 "num_base_bdevs": 4, 00:13:20.107 "num_base_bdevs_discovered": 3, 00:13:20.107 "num_base_bdevs_operational": 3, 00:13:20.107 "base_bdevs_list": [ 00:13:20.107 { 00:13:20.107 "name": null, 00:13:20.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.107 "is_configured": false, 00:13:20.107 "data_offset": 0, 00:13:20.107 "data_size": 65536 00:13:20.107 }, 00:13:20.107 { 00:13:20.107 "name": "BaseBdev2", 00:13:20.107 "uuid": "3c0bea03-1178-5b52-a2f6-0316d589486e", 00:13:20.107 "is_configured": true, 00:13:20.107 "data_offset": 0, 00:13:20.107 "data_size": 65536 00:13:20.107 }, 00:13:20.107 { 00:13:20.107 "name": "BaseBdev3", 00:13:20.107 "uuid": "d425f0b6-ff2b-53f7-9983-7ae642883fc2", 00:13:20.107 "is_configured": true, 00:13:20.107 "data_offset": 0, 00:13:20.107 "data_size": 65536 00:13:20.107 }, 00:13:20.107 { 00:13:20.107 "name": "BaseBdev4", 00:13:20.107 "uuid": "fe6da438-ef9c-5047-b5c2-925016d47ee0", 00:13:20.107 "is_configured": true, 00:13:20.107 "data_offset": 0, 00:13:20.107 "data_size": 65536 00:13:20.107 } 00:13:20.107 ] 00:13:20.107 }' 00:13:20.107 14:38:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:20.107 14:38:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:20.107 14:38:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:20.107 14:38:11 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:20.107 14:38:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:20.107 14:38:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.107 14:38:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.107 [2024-10-01 14:38:11.639270] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:20.107 [2024-10-01 14:38:11.648155] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:13:20.107 14:38:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.107 14:38:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:20.107 [2024-10-01 14:38:11.654597] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:21.046 14:38:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:21.046 14:38:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.046 14:38:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:21.046 14:38:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:21.046 14:38:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.046 14:38:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.046 14:38:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.046 14:38:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.046 14:38:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.046 14:38:12 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.046 14:38:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.046 "name": "raid_bdev1", 00:13:21.046 "uuid": "c2630ba2-901d-4376-923e-6d6d5c55a64e", 00:13:21.046 "strip_size_kb": 64, 00:13:21.046 "state": "online", 00:13:21.046 "raid_level": "raid5f", 00:13:21.046 "superblock": false, 00:13:21.046 "num_base_bdevs": 4, 00:13:21.046 "num_base_bdevs_discovered": 4, 00:13:21.046 "num_base_bdevs_operational": 4, 00:13:21.046 "process": { 00:13:21.046 "type": "rebuild", 00:13:21.046 "target": "spare", 00:13:21.046 "progress": { 00:13:21.046 "blocks": 19200, 00:13:21.046 "percent": 9 00:13:21.046 } 00:13:21.046 }, 00:13:21.046 "base_bdevs_list": [ 00:13:21.046 { 00:13:21.046 "name": "spare", 00:13:21.046 "uuid": "30188c4a-4b3a-5c98-9c9c-66dd04c109c7", 00:13:21.046 "is_configured": true, 00:13:21.046 "data_offset": 0, 00:13:21.046 "data_size": 65536 00:13:21.046 }, 00:13:21.046 { 00:13:21.046 "name": "BaseBdev2", 00:13:21.046 "uuid": "3c0bea03-1178-5b52-a2f6-0316d589486e", 00:13:21.046 "is_configured": true, 00:13:21.046 "data_offset": 0, 00:13:21.046 "data_size": 65536 00:13:21.046 }, 00:13:21.046 { 00:13:21.046 "name": "BaseBdev3", 00:13:21.046 "uuid": "d425f0b6-ff2b-53f7-9983-7ae642883fc2", 00:13:21.046 "is_configured": true, 00:13:21.046 "data_offset": 0, 00:13:21.046 "data_size": 65536 00:13:21.046 }, 00:13:21.046 { 00:13:21.046 "name": "BaseBdev4", 00:13:21.046 "uuid": "fe6da438-ef9c-5047-b5c2-925016d47ee0", 00:13:21.046 "is_configured": true, 00:13:21.046 "data_offset": 0, 00:13:21.046 "data_size": 65536 00:13:21.046 } 00:13:21.046 ] 00:13:21.046 }' 00:13:21.046 14:38:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.046 14:38:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:21.046 14:38:12 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.367 14:38:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:21.367 14:38:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:21.367 14:38:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:21.367 14:38:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:13:21.367 14:38:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=512 00:13:21.367 14:38:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:21.367 14:38:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:21.367 14:38:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.367 14:38:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:21.367 14:38:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:21.367 14:38:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.367 14:38:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.367 14:38:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.367 14:38:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.367 14:38:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.367 14:38:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.367 14:38:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.367 "name": "raid_bdev1", 00:13:21.367 "uuid": "c2630ba2-901d-4376-923e-6d6d5c55a64e", 
00:13:21.367 "strip_size_kb": 64, 00:13:21.367 "state": "online", 00:13:21.367 "raid_level": "raid5f", 00:13:21.367 "superblock": false, 00:13:21.367 "num_base_bdevs": 4, 00:13:21.367 "num_base_bdevs_discovered": 4, 00:13:21.367 "num_base_bdevs_operational": 4, 00:13:21.367 "process": { 00:13:21.367 "type": "rebuild", 00:13:21.367 "target": "spare", 00:13:21.367 "progress": { 00:13:21.367 "blocks": 21120, 00:13:21.367 "percent": 10 00:13:21.367 } 00:13:21.367 }, 00:13:21.367 "base_bdevs_list": [ 00:13:21.367 { 00:13:21.367 "name": "spare", 00:13:21.367 "uuid": "30188c4a-4b3a-5c98-9c9c-66dd04c109c7", 00:13:21.367 "is_configured": true, 00:13:21.367 "data_offset": 0, 00:13:21.367 "data_size": 65536 00:13:21.367 }, 00:13:21.367 { 00:13:21.367 "name": "BaseBdev2", 00:13:21.367 "uuid": "3c0bea03-1178-5b52-a2f6-0316d589486e", 00:13:21.367 "is_configured": true, 00:13:21.367 "data_offset": 0, 00:13:21.367 "data_size": 65536 00:13:21.367 }, 00:13:21.367 { 00:13:21.367 "name": "BaseBdev3", 00:13:21.367 "uuid": "d425f0b6-ff2b-53f7-9983-7ae642883fc2", 00:13:21.367 "is_configured": true, 00:13:21.367 "data_offset": 0, 00:13:21.367 "data_size": 65536 00:13:21.367 }, 00:13:21.367 { 00:13:21.367 "name": "BaseBdev4", 00:13:21.368 "uuid": "fe6da438-ef9c-5047-b5c2-925016d47ee0", 00:13:21.368 "is_configured": true, 00:13:21.368 "data_offset": 0, 00:13:21.368 "data_size": 65536 00:13:21.368 } 00:13:21.368 ] 00:13:21.368 }' 00:13:21.368 14:38:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.368 14:38:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:21.368 14:38:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.368 14:38:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:21.368 14:38:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:22.308 14:38:13 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:22.308 14:38:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:22.308 14:38:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.308 14:38:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:22.308 14:38:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:22.308 14:38:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.308 14:38:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.308 14:38:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.308 14:38:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.308 14:38:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.308 14:38:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.308 14:38:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.308 "name": "raid_bdev1", 00:13:22.308 "uuid": "c2630ba2-901d-4376-923e-6d6d5c55a64e", 00:13:22.308 "strip_size_kb": 64, 00:13:22.308 "state": "online", 00:13:22.308 "raid_level": "raid5f", 00:13:22.308 "superblock": false, 00:13:22.308 "num_base_bdevs": 4, 00:13:22.308 "num_base_bdevs_discovered": 4, 00:13:22.308 "num_base_bdevs_operational": 4, 00:13:22.308 "process": { 00:13:22.308 "type": "rebuild", 00:13:22.308 "target": "spare", 00:13:22.308 "progress": { 00:13:22.308 "blocks": 40320, 00:13:22.308 "percent": 20 00:13:22.308 } 00:13:22.308 }, 00:13:22.308 "base_bdevs_list": [ 00:13:22.308 { 00:13:22.308 "name": "spare", 00:13:22.308 "uuid": "30188c4a-4b3a-5c98-9c9c-66dd04c109c7", 
00:13:22.308 "is_configured": true, 00:13:22.308 "data_offset": 0, 00:13:22.308 "data_size": 65536 00:13:22.308 }, 00:13:22.308 { 00:13:22.308 "name": "BaseBdev2", 00:13:22.308 "uuid": "3c0bea03-1178-5b52-a2f6-0316d589486e", 00:13:22.308 "is_configured": true, 00:13:22.308 "data_offset": 0, 00:13:22.308 "data_size": 65536 00:13:22.308 }, 00:13:22.308 { 00:13:22.308 "name": "BaseBdev3", 00:13:22.308 "uuid": "d425f0b6-ff2b-53f7-9983-7ae642883fc2", 00:13:22.308 "is_configured": true, 00:13:22.308 "data_offset": 0, 00:13:22.308 "data_size": 65536 00:13:22.308 }, 00:13:22.308 { 00:13:22.308 "name": "BaseBdev4", 00:13:22.308 "uuid": "fe6da438-ef9c-5047-b5c2-925016d47ee0", 00:13:22.308 "is_configured": true, 00:13:22.308 "data_offset": 0, 00:13:22.308 "data_size": 65536 00:13:22.308 } 00:13:22.308 ] 00:13:22.308 }' 00:13:22.308 14:38:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:22.308 14:38:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:22.308 14:38:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.308 14:38:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:22.308 14:38:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:23.688 14:38:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:23.688 14:38:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:23.688 14:38:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.688 14:38:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:23.688 14:38:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:23.688 14:38:14 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.688 14:38:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.688 14:38:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.688 14:38:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.688 14:38:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.688 14:38:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.688 14:38:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.688 "name": "raid_bdev1", 00:13:23.688 "uuid": "c2630ba2-901d-4376-923e-6d6d5c55a64e", 00:13:23.688 "strip_size_kb": 64, 00:13:23.688 "state": "online", 00:13:23.688 "raid_level": "raid5f", 00:13:23.689 "superblock": false, 00:13:23.689 "num_base_bdevs": 4, 00:13:23.689 "num_base_bdevs_discovered": 4, 00:13:23.689 "num_base_bdevs_operational": 4, 00:13:23.689 "process": { 00:13:23.689 "type": "rebuild", 00:13:23.689 "target": "spare", 00:13:23.689 "progress": { 00:13:23.689 "blocks": 61440, 00:13:23.689 "percent": 31 00:13:23.689 } 00:13:23.689 }, 00:13:23.689 "base_bdevs_list": [ 00:13:23.689 { 00:13:23.689 "name": "spare", 00:13:23.689 "uuid": "30188c4a-4b3a-5c98-9c9c-66dd04c109c7", 00:13:23.689 "is_configured": true, 00:13:23.689 "data_offset": 0, 00:13:23.689 "data_size": 65536 00:13:23.689 }, 00:13:23.689 { 00:13:23.689 "name": "BaseBdev2", 00:13:23.689 "uuid": "3c0bea03-1178-5b52-a2f6-0316d589486e", 00:13:23.689 "is_configured": true, 00:13:23.689 "data_offset": 0, 00:13:23.689 "data_size": 65536 00:13:23.689 }, 00:13:23.689 { 00:13:23.689 "name": "BaseBdev3", 00:13:23.689 "uuid": "d425f0b6-ff2b-53f7-9983-7ae642883fc2", 00:13:23.689 "is_configured": true, 00:13:23.689 "data_offset": 0, 00:13:23.689 "data_size": 65536 00:13:23.689 }, 00:13:23.689 { 00:13:23.689 "name": 
"BaseBdev4", 00:13:23.689 "uuid": "fe6da438-ef9c-5047-b5c2-925016d47ee0", 00:13:23.689 "is_configured": true, 00:13:23.689 "data_offset": 0, 00:13:23.689 "data_size": 65536 00:13:23.689 } 00:13:23.689 ] 00:13:23.689 }' 00:13:23.689 14:38:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.689 14:38:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:23.689 14:38:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.689 14:38:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:23.689 14:38:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:24.630 14:38:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:24.630 14:38:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:24.630 14:38:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:24.630 14:38:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:24.630 14:38:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:24.630 14:38:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:24.630 14:38:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.630 14:38:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.630 14:38:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.630 14:38:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.630 14:38:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.630 14:38:16 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:24.630 "name": "raid_bdev1", 00:13:24.630 "uuid": "c2630ba2-901d-4376-923e-6d6d5c55a64e", 00:13:24.630 "strip_size_kb": 64, 00:13:24.630 "state": "online", 00:13:24.630 "raid_level": "raid5f", 00:13:24.630 "superblock": false, 00:13:24.630 "num_base_bdevs": 4, 00:13:24.630 "num_base_bdevs_discovered": 4, 00:13:24.630 "num_base_bdevs_operational": 4, 00:13:24.630 "process": { 00:13:24.630 "type": "rebuild", 00:13:24.630 "target": "spare", 00:13:24.630 "progress": { 00:13:24.630 "blocks": 82560, 00:13:24.630 "percent": 41 00:13:24.630 } 00:13:24.630 }, 00:13:24.630 "base_bdevs_list": [ 00:13:24.630 { 00:13:24.630 "name": "spare", 00:13:24.630 "uuid": "30188c4a-4b3a-5c98-9c9c-66dd04c109c7", 00:13:24.630 "is_configured": true, 00:13:24.630 "data_offset": 0, 00:13:24.630 "data_size": 65536 00:13:24.630 }, 00:13:24.630 { 00:13:24.630 "name": "BaseBdev2", 00:13:24.630 "uuid": "3c0bea03-1178-5b52-a2f6-0316d589486e", 00:13:24.630 "is_configured": true, 00:13:24.630 "data_offset": 0, 00:13:24.630 "data_size": 65536 00:13:24.630 }, 00:13:24.630 { 00:13:24.630 "name": "BaseBdev3", 00:13:24.630 "uuid": "d425f0b6-ff2b-53f7-9983-7ae642883fc2", 00:13:24.630 "is_configured": true, 00:13:24.630 "data_offset": 0, 00:13:24.630 "data_size": 65536 00:13:24.630 }, 00:13:24.630 { 00:13:24.630 "name": "BaseBdev4", 00:13:24.630 "uuid": "fe6da438-ef9c-5047-b5c2-925016d47ee0", 00:13:24.630 "is_configured": true, 00:13:24.630 "data_offset": 0, 00:13:24.630 "data_size": 65536 00:13:24.630 } 00:13:24.630 ] 00:13:24.630 }' 00:13:24.630 14:38:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:24.630 14:38:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:24.630 14:38:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.630 14:38:16 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:24.630 14:38:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:25.626 14:38:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:25.626 14:38:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:25.626 14:38:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.626 14:38:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:25.626 14:38:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:25.626 14:38:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.626 14:38:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.626 14:38:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.626 14:38:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.626 14:38:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.626 14:38:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.626 14:38:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.626 "name": "raid_bdev1", 00:13:25.626 "uuid": "c2630ba2-901d-4376-923e-6d6d5c55a64e", 00:13:25.626 "strip_size_kb": 64, 00:13:25.626 "state": "online", 00:13:25.626 "raid_level": "raid5f", 00:13:25.626 "superblock": false, 00:13:25.626 "num_base_bdevs": 4, 00:13:25.626 "num_base_bdevs_discovered": 4, 00:13:25.626 "num_base_bdevs_operational": 4, 00:13:25.626 "process": { 00:13:25.626 "type": "rebuild", 00:13:25.626 "target": "spare", 00:13:25.626 "progress": { 00:13:25.626 "blocks": 103680, 00:13:25.626 "percent": 52 00:13:25.626 } 
00:13:25.626 }, 00:13:25.626 "base_bdevs_list": [ 00:13:25.626 { 00:13:25.626 "name": "spare", 00:13:25.626 "uuid": "30188c4a-4b3a-5c98-9c9c-66dd04c109c7", 00:13:25.626 "is_configured": true, 00:13:25.626 "data_offset": 0, 00:13:25.626 "data_size": 65536 00:13:25.626 }, 00:13:25.626 { 00:13:25.626 "name": "BaseBdev2", 00:13:25.626 "uuid": "3c0bea03-1178-5b52-a2f6-0316d589486e", 00:13:25.626 "is_configured": true, 00:13:25.626 "data_offset": 0, 00:13:25.626 "data_size": 65536 00:13:25.626 }, 00:13:25.626 { 00:13:25.626 "name": "BaseBdev3", 00:13:25.626 "uuid": "d425f0b6-ff2b-53f7-9983-7ae642883fc2", 00:13:25.626 "is_configured": true, 00:13:25.626 "data_offset": 0, 00:13:25.626 "data_size": 65536 00:13:25.626 }, 00:13:25.626 { 00:13:25.626 "name": "BaseBdev4", 00:13:25.626 "uuid": "fe6da438-ef9c-5047-b5c2-925016d47ee0", 00:13:25.626 "is_configured": true, 00:13:25.626 "data_offset": 0, 00:13:25.626 "data_size": 65536 00:13:25.626 } 00:13:25.626 ] 00:13:25.626 }' 00:13:25.626 14:38:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.626 14:38:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:25.626 14:38:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.884 14:38:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:25.884 14:38:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:26.816 14:38:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:26.816 14:38:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:26.816 14:38:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:26.816 14:38:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:26.816 
14:38:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:26.816 14:38:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:26.816 14:38:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.816 14:38:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.816 14:38:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.816 14:38:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.816 14:38:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.816 14:38:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:26.816 "name": "raid_bdev1", 00:13:26.816 "uuid": "c2630ba2-901d-4376-923e-6d6d5c55a64e", 00:13:26.816 "strip_size_kb": 64, 00:13:26.816 "state": "online", 00:13:26.816 "raid_level": "raid5f", 00:13:26.816 "superblock": false, 00:13:26.816 "num_base_bdevs": 4, 00:13:26.816 "num_base_bdevs_discovered": 4, 00:13:26.816 "num_base_bdevs_operational": 4, 00:13:26.816 "process": { 00:13:26.816 "type": "rebuild", 00:13:26.816 "target": "spare", 00:13:26.816 "progress": { 00:13:26.816 "blocks": 124800, 00:13:26.816 "percent": 63 00:13:26.816 } 00:13:26.816 }, 00:13:26.816 "base_bdevs_list": [ 00:13:26.816 { 00:13:26.816 "name": "spare", 00:13:26.816 "uuid": "30188c4a-4b3a-5c98-9c9c-66dd04c109c7", 00:13:26.816 "is_configured": true, 00:13:26.816 "data_offset": 0, 00:13:26.816 "data_size": 65536 00:13:26.816 }, 00:13:26.816 { 00:13:26.816 "name": "BaseBdev2", 00:13:26.816 "uuid": "3c0bea03-1178-5b52-a2f6-0316d589486e", 00:13:26.816 "is_configured": true, 00:13:26.816 "data_offset": 0, 00:13:26.816 "data_size": 65536 00:13:26.816 }, 00:13:26.816 { 00:13:26.816 "name": "BaseBdev3", 00:13:26.816 "uuid": "d425f0b6-ff2b-53f7-9983-7ae642883fc2", 
00:13:26.816 "is_configured": true, 00:13:26.816 "data_offset": 0, 00:13:26.816 "data_size": 65536 00:13:26.816 }, 00:13:26.816 { 00:13:26.816 "name": "BaseBdev4", 00:13:26.816 "uuid": "fe6da438-ef9c-5047-b5c2-925016d47ee0", 00:13:26.816 "is_configured": true, 00:13:26.816 "data_offset": 0, 00:13:26.816 "data_size": 65536 00:13:26.816 } 00:13:26.816 ] 00:13:26.816 }' 00:13:26.816 14:38:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:26.816 14:38:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:26.816 14:38:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:26.816 14:38:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:26.816 14:38:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:27.747 14:38:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:27.747 14:38:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:27.747 14:38:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.747 14:38:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:27.747 14:38:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:27.747 14:38:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.747 14:38:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.747 14:38:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.747 14:38:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.747 14:38:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:13:27.747 14:38:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.747 14:38:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.747 "name": "raid_bdev1", 00:13:27.747 "uuid": "c2630ba2-901d-4376-923e-6d6d5c55a64e", 00:13:27.747 "strip_size_kb": 64, 00:13:27.747 "state": "online", 00:13:27.747 "raid_level": "raid5f", 00:13:27.747 "superblock": false, 00:13:27.747 "num_base_bdevs": 4, 00:13:27.747 "num_base_bdevs_discovered": 4, 00:13:27.747 "num_base_bdevs_operational": 4, 00:13:27.747 "process": { 00:13:27.747 "type": "rebuild", 00:13:27.747 "target": "spare", 00:13:27.747 "progress": { 00:13:27.747 "blocks": 145920, 00:13:27.747 "percent": 74 00:13:27.747 } 00:13:27.747 }, 00:13:27.747 "base_bdevs_list": [ 00:13:27.747 { 00:13:27.747 "name": "spare", 00:13:27.747 "uuid": "30188c4a-4b3a-5c98-9c9c-66dd04c109c7", 00:13:27.747 "is_configured": true, 00:13:27.747 "data_offset": 0, 00:13:27.747 "data_size": 65536 00:13:27.747 }, 00:13:27.747 { 00:13:27.747 "name": "BaseBdev2", 00:13:27.747 "uuid": "3c0bea03-1178-5b52-a2f6-0316d589486e", 00:13:27.747 "is_configured": true, 00:13:27.747 "data_offset": 0, 00:13:27.747 "data_size": 65536 00:13:27.747 }, 00:13:27.747 { 00:13:27.747 "name": "BaseBdev3", 00:13:27.747 "uuid": "d425f0b6-ff2b-53f7-9983-7ae642883fc2", 00:13:27.747 "is_configured": true, 00:13:27.747 "data_offset": 0, 00:13:27.747 "data_size": 65536 00:13:27.747 }, 00:13:27.747 { 00:13:27.747 "name": "BaseBdev4", 00:13:27.747 "uuid": "fe6da438-ef9c-5047-b5c2-925016d47ee0", 00:13:27.747 "is_configured": true, 00:13:27.747 "data_offset": 0, 00:13:27.747 "data_size": 65536 00:13:27.747 } 00:13:27.747 ] 00:13:27.747 }' 00:13:27.747 14:38:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.003 14:38:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:28.003 14:38:19 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:28.003 14:38:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:28.003 14:38:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:28.934 14:38:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:28.934 14:38:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:28.934 14:38:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.934 14:38:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:28.934 14:38:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:28.934 14:38:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.935 14:38:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.935 14:38:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.935 14:38:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.935 14:38:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.935 14:38:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.935 14:38:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:28.935 "name": "raid_bdev1", 00:13:28.935 "uuid": "c2630ba2-901d-4376-923e-6d6d5c55a64e", 00:13:28.935 "strip_size_kb": 64, 00:13:28.935 "state": "online", 00:13:28.935 "raid_level": "raid5f", 00:13:28.935 "superblock": false, 00:13:28.935 "num_base_bdevs": 4, 00:13:28.935 "num_base_bdevs_discovered": 4, 00:13:28.935 "num_base_bdevs_operational": 4, 00:13:28.935 "process": { 00:13:28.935 
"type": "rebuild", 00:13:28.935 "target": "spare", 00:13:28.935 "progress": { 00:13:28.935 "blocks": 167040, 00:13:28.935 "percent": 84 00:13:28.935 } 00:13:28.935 }, 00:13:28.935 "base_bdevs_list": [ 00:13:28.935 { 00:13:28.935 "name": "spare", 00:13:28.935 "uuid": "30188c4a-4b3a-5c98-9c9c-66dd04c109c7", 00:13:28.935 "is_configured": true, 00:13:28.935 "data_offset": 0, 00:13:28.935 "data_size": 65536 00:13:28.935 }, 00:13:28.935 { 00:13:28.935 "name": "BaseBdev2", 00:13:28.935 "uuid": "3c0bea03-1178-5b52-a2f6-0316d589486e", 00:13:28.935 "is_configured": true, 00:13:28.935 "data_offset": 0, 00:13:28.935 "data_size": 65536 00:13:28.935 }, 00:13:28.935 { 00:13:28.935 "name": "BaseBdev3", 00:13:28.935 "uuid": "d425f0b6-ff2b-53f7-9983-7ae642883fc2", 00:13:28.935 "is_configured": true, 00:13:28.935 "data_offset": 0, 00:13:28.935 "data_size": 65536 00:13:28.935 }, 00:13:28.935 { 00:13:28.935 "name": "BaseBdev4", 00:13:28.935 "uuid": "fe6da438-ef9c-5047-b5c2-925016d47ee0", 00:13:28.935 "is_configured": true, 00:13:28.935 "data_offset": 0, 00:13:28.935 "data_size": 65536 00:13:28.935 } 00:13:28.935 ] 00:13:28.935 }' 00:13:28.935 14:38:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.935 14:38:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:28.935 14:38:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:28.935 14:38:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:28.935 14:38:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:30.307 14:38:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:30.307 14:38:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:30.307 14:38:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:13:30.307 14:38:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:30.307 14:38:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:30.307 14:38:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:30.307 14:38:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.307 14:38:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.307 14:38:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.307 14:38:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.307 14:38:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.307 14:38:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:30.307 "name": "raid_bdev1", 00:13:30.307 "uuid": "c2630ba2-901d-4376-923e-6d6d5c55a64e", 00:13:30.307 "strip_size_kb": 64, 00:13:30.307 "state": "online", 00:13:30.307 "raid_level": "raid5f", 00:13:30.307 "superblock": false, 00:13:30.307 "num_base_bdevs": 4, 00:13:30.307 "num_base_bdevs_discovered": 4, 00:13:30.307 "num_base_bdevs_operational": 4, 00:13:30.307 "process": { 00:13:30.307 "type": "rebuild", 00:13:30.307 "target": "spare", 00:13:30.307 "progress": { 00:13:30.307 "blocks": 188160, 00:13:30.307 "percent": 95 00:13:30.307 } 00:13:30.307 }, 00:13:30.307 "base_bdevs_list": [ 00:13:30.307 { 00:13:30.307 "name": "spare", 00:13:30.307 "uuid": "30188c4a-4b3a-5c98-9c9c-66dd04c109c7", 00:13:30.307 "is_configured": true, 00:13:30.307 "data_offset": 0, 00:13:30.307 "data_size": 65536 00:13:30.307 }, 00:13:30.307 { 00:13:30.307 "name": "BaseBdev2", 00:13:30.307 "uuid": "3c0bea03-1178-5b52-a2f6-0316d589486e", 00:13:30.307 "is_configured": true, 00:13:30.307 "data_offset": 0, 00:13:30.307 
"data_size": 65536 00:13:30.307 }, 00:13:30.307 { 00:13:30.307 "name": "BaseBdev3", 00:13:30.307 "uuid": "d425f0b6-ff2b-53f7-9983-7ae642883fc2", 00:13:30.307 "is_configured": true, 00:13:30.307 "data_offset": 0, 00:13:30.307 "data_size": 65536 00:13:30.307 }, 00:13:30.307 { 00:13:30.307 "name": "BaseBdev4", 00:13:30.307 "uuid": "fe6da438-ef9c-5047-b5c2-925016d47ee0", 00:13:30.307 "is_configured": true, 00:13:30.307 "data_offset": 0, 00:13:30.307 "data_size": 65536 00:13:30.307 } 00:13:30.307 ] 00:13:30.307 }' 00:13:30.307 14:38:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:30.307 14:38:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:30.307 14:38:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:30.307 14:38:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:30.307 14:38:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:30.564 [2024-10-01 14:38:22.034750] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:30.564 [2024-10-01 14:38:22.034830] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:30.564 [2024-10-01 14:38:22.034875] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:31.128 14:38:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:31.128 14:38:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:31.128 14:38:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.128 14:38:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:31.128 14:38:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:13:31.128 14:38:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.128 14:38:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.128 14:38:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.128 14:38:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.128 14:38:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.128 14:38:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.128 14:38:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.128 "name": "raid_bdev1", 00:13:31.128 "uuid": "c2630ba2-901d-4376-923e-6d6d5c55a64e", 00:13:31.128 "strip_size_kb": 64, 00:13:31.128 "state": "online", 00:13:31.128 "raid_level": "raid5f", 00:13:31.128 "superblock": false, 00:13:31.128 "num_base_bdevs": 4, 00:13:31.128 "num_base_bdevs_discovered": 4, 00:13:31.128 "num_base_bdevs_operational": 4, 00:13:31.128 "base_bdevs_list": [ 00:13:31.128 { 00:13:31.128 "name": "spare", 00:13:31.128 "uuid": "30188c4a-4b3a-5c98-9c9c-66dd04c109c7", 00:13:31.128 "is_configured": true, 00:13:31.128 "data_offset": 0, 00:13:31.128 "data_size": 65536 00:13:31.128 }, 00:13:31.128 { 00:13:31.128 "name": "BaseBdev2", 00:13:31.128 "uuid": "3c0bea03-1178-5b52-a2f6-0316d589486e", 00:13:31.128 "is_configured": true, 00:13:31.128 "data_offset": 0, 00:13:31.128 "data_size": 65536 00:13:31.128 }, 00:13:31.128 { 00:13:31.128 "name": "BaseBdev3", 00:13:31.128 "uuid": "d425f0b6-ff2b-53f7-9983-7ae642883fc2", 00:13:31.128 "is_configured": true, 00:13:31.128 "data_offset": 0, 00:13:31.128 "data_size": 65536 00:13:31.128 }, 00:13:31.128 { 00:13:31.128 "name": "BaseBdev4", 00:13:31.128 "uuid": "fe6da438-ef9c-5047-b5c2-925016d47ee0", 00:13:31.128 "is_configured": true, 00:13:31.128 "data_offset": 0, 
00:13:31.128 "data_size": 65536 00:13:31.128 } 00:13:31.128 ] 00:13:31.128 }' 00:13:31.128 14:38:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:31.128 14:38:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:31.128 14:38:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:31.386 14:38:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:31.386 14:38:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:31.386 14:38:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:31.386 14:38:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.386 14:38:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:31.386 14:38:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:31.386 14:38:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.386 14:38:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.386 14:38:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.386 14:38:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.386 14:38:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.386 14:38:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.386 14:38:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.386 "name": "raid_bdev1", 00:13:31.386 "uuid": "c2630ba2-901d-4376-923e-6d6d5c55a64e", 00:13:31.386 "strip_size_kb": 64, 00:13:31.386 "state": "online", 00:13:31.386 "raid_level": 
"raid5f", 00:13:31.386 "superblock": false, 00:13:31.386 "num_base_bdevs": 4, 00:13:31.386 "num_base_bdevs_discovered": 4, 00:13:31.386 "num_base_bdevs_operational": 4, 00:13:31.386 "base_bdevs_list": [ 00:13:31.386 { 00:13:31.386 "name": "spare", 00:13:31.386 "uuid": "30188c4a-4b3a-5c98-9c9c-66dd04c109c7", 00:13:31.386 "is_configured": true, 00:13:31.386 "data_offset": 0, 00:13:31.386 "data_size": 65536 00:13:31.386 }, 00:13:31.386 { 00:13:31.386 "name": "BaseBdev2", 00:13:31.386 "uuid": "3c0bea03-1178-5b52-a2f6-0316d589486e", 00:13:31.386 "is_configured": true, 00:13:31.386 "data_offset": 0, 00:13:31.386 "data_size": 65536 00:13:31.386 }, 00:13:31.386 { 00:13:31.386 "name": "BaseBdev3", 00:13:31.386 "uuid": "d425f0b6-ff2b-53f7-9983-7ae642883fc2", 00:13:31.386 "is_configured": true, 00:13:31.386 "data_offset": 0, 00:13:31.386 "data_size": 65536 00:13:31.386 }, 00:13:31.386 { 00:13:31.386 "name": "BaseBdev4", 00:13:31.386 "uuid": "fe6da438-ef9c-5047-b5c2-925016d47ee0", 00:13:31.386 "is_configured": true, 00:13:31.386 "data_offset": 0, 00:13:31.386 "data_size": 65536 00:13:31.386 } 00:13:31.386 ] 00:13:31.386 }' 00:13:31.386 14:38:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:31.386 14:38:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:31.386 14:38:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:31.386 14:38:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:31.386 14:38:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:13:31.386 14:38:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:31.386 14:38:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:31.386 14:38:22 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:31.386 14:38:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:31.386 14:38:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:31.386 14:38:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.386 14:38:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.386 14:38:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.386 14:38:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.386 14:38:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.386 14:38:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.386 14:38:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.386 14:38:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.386 14:38:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.386 14:38:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.386 "name": "raid_bdev1", 00:13:31.386 "uuid": "c2630ba2-901d-4376-923e-6d6d5c55a64e", 00:13:31.386 "strip_size_kb": 64, 00:13:31.386 "state": "online", 00:13:31.386 "raid_level": "raid5f", 00:13:31.386 "superblock": false, 00:13:31.386 "num_base_bdevs": 4, 00:13:31.386 "num_base_bdevs_discovered": 4, 00:13:31.386 "num_base_bdevs_operational": 4, 00:13:31.386 "base_bdevs_list": [ 00:13:31.386 { 00:13:31.386 "name": "spare", 00:13:31.386 "uuid": "30188c4a-4b3a-5c98-9c9c-66dd04c109c7", 00:13:31.386 "is_configured": true, 00:13:31.386 "data_offset": 0, 00:13:31.386 "data_size": 65536 00:13:31.386 }, 00:13:31.386 { 00:13:31.386 "name": "BaseBdev2", 
00:13:31.386 "uuid": "3c0bea03-1178-5b52-a2f6-0316d589486e", 00:13:31.386 "is_configured": true, 00:13:31.386 "data_offset": 0, 00:13:31.386 "data_size": 65536 00:13:31.386 }, 00:13:31.386 { 00:13:31.386 "name": "BaseBdev3", 00:13:31.386 "uuid": "d425f0b6-ff2b-53f7-9983-7ae642883fc2", 00:13:31.386 "is_configured": true, 00:13:31.386 "data_offset": 0, 00:13:31.386 "data_size": 65536 00:13:31.386 }, 00:13:31.386 { 00:13:31.386 "name": "BaseBdev4", 00:13:31.386 "uuid": "fe6da438-ef9c-5047-b5c2-925016d47ee0", 00:13:31.386 "is_configured": true, 00:13:31.386 "data_offset": 0, 00:13:31.386 "data_size": 65536 00:13:31.386 } 00:13:31.386 ] 00:13:31.386 }' 00:13:31.386 14:38:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.386 14:38:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.644 14:38:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:31.644 14:38:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.644 14:38:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.644 [2024-10-01 14:38:23.262396] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:31.644 [2024-10-01 14:38:23.262439] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:31.644 [2024-10-01 14:38:23.262517] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:31.644 [2024-10-01 14:38:23.262622] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:31.644 [2024-10-01 14:38:23.262640] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:31.644 14:38:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.644 14:38:23 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.644 14:38:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.644 14:38:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:31.644 14:38:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.644 14:38:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.644 14:38:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:31.644 14:38:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:31.644 14:38:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:31.644 14:38:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:31.644 14:38:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:31.644 14:38:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:31.644 14:38:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:31.644 14:38:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:31.644 14:38:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:31.644 14:38:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:31.644 14:38:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:31.644 14:38:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:31.644 14:38:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:31.902 /dev/nbd0 00:13:31.902 14:38:23 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:31.902 14:38:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:31.902 14:38:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:31.902 14:38:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:31.902 14:38:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:31.902 14:38:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:31.902 14:38:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:31.902 14:38:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:31.902 14:38:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:31.902 14:38:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:31.902 14:38:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:31.902 1+0 records in 00:13:31.902 1+0 records out 00:13:31.902 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000297894 s, 13.7 MB/s 00:13:31.902 14:38:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.902 14:38:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:31.902 14:38:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.902 14:38:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:31.902 14:38:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:31.902 14:38:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:13:31.902 14:38:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:13:31.902 14:38:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1
00:13:32.219 /dev/nbd1
00:13:32.219 14:38:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:13:32.219 14:38:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:13:32.219 14:38:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:13:32.219 14:38:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i
00:13:32.219 14:38:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:13:32.219 14:38:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:13:32.219 14:38:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:13:32.219 14:38:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break
00:13:32.219 14:38:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:13:32.219 14:38:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:13:32.219 14:38:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:32.219 1+0 records in
00:13:32.219 1+0 records out
00:13:32.219 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212178 s, 19.3 MB/s
00:13:32.219 14:38:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:32.219 14:38:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096
00:13:32.219 14:38:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:32.219 14:38:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:13:32.219 14:38:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0
00:13:32.219 14:38:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:13:32.219 14:38:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:13:32.219 14:38:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1
00:13:32.476 14:38:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1'
00:13:32.476 14:38:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:13:32.476 14:38:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:13:32.476 14:38:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list
00:13:32.476 14:38:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i
00:13:32.476 14:38:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:32.476 14:38:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:13:32.476 14:38:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:13:32.476 14:38:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:13:32.476 14:38:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:13:32.476 14:38:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:32.476 14:38:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:32.476 14:38:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:13:32.476 14:38:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:13:32.476 14:38:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:13:32.476 14:38:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:32.476 14:38:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:13:32.733 14:38:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:13:32.734 14:38:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:13:32.734 14:38:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:13:32.734 14:38:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:32.734 14:38:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:32.734 14:38:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:13:32.734 14:38:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:13:32.734 14:38:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:13:32.734 14:38:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']'
00:13:32.734 14:38:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 82455
00:13:32.734 14:38:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 82455 ']'
00:13:32.734 14:38:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 82455
00:13:32.734 14:38:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname
00:13:32.734 14:38:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:13:32.734 14:38:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82455
00:13:32.734 killing process with pid 82455
00:13:32.734 Received shutdown signal, test time was about 60.000000 seconds
00:13:32.734
00:13:32.734 Latency(us)
00:13:32.734 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:32.734 ===================================================================================================================
00:13:32.734 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:13:32.734 14:38:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:13:32.734 14:38:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:13:32.734 14:38:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82455'
00:13:32.734 14:38:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 82455
00:13:32.734 [2024-10-01 14:38:24.383562] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:13:32.734 14:38:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 82455
00:13:33.298 [2024-10-01 14:38:24.693048] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:13:33.863 14:38:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0
00:13:33.863
00:13:33.863 real 0m18.396s
00:13:33.863 user 0m21.556s
00:13:33.863 sys 0m1.883s
00:13:33.863 14:38:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:33.863 ************************************
00:13:33.863 END TEST raid5f_rebuild_test
00:13:33.863 ************************************
00:13:33.863 14:38:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:34.121 14:38:25 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true
00:13:34.121 14:38:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:13:34.121 14:38:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:13:34.121 14:38:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:13:34.121 ************************************
00:13:34.121 START TEST raid5f_rebuild_test_sb
00:13:34.121 ************************************
00:13:34.121 14:38:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 true false true
00:13:34.121 14:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f
00:13:34.121 14:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4
00:13:34.121 14:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true
00:13:34.121 14:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false
00:13:34.121 14:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true
00:13:34.121 14:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
00:13:34.121 14:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:13:34.121 14:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
00:13:34.121 14:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:13:34.121 14:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:13:34.121 14:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
00:13:34.121 14:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:13:34.121 14:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:13:34.121 14:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3
00:13:34.121 14:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:13:34.121 14:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:13:34.121 14:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4
00:13:34.121 14:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:13:34.121 14:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:13:34.121 14:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:13:34.121 14:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:13:34.121 14:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:13:34.121 14:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size
00:13:34.121 14:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg
00:13:34.121 14:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:13:34.121 14:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset
00:13:34.121 14:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']'
00:13:34.121 14:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']'
00:13:34.121 14:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64
00:13:34.121 14:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64'
00:13:34.121 14:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']'
00:13:34.121 14:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s'
00:13:34.121 14:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82960
00:13:34.121 14:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82960
00:13:34.121 14:38:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 82960 ']'
00:13:34.121 14:38:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:34.121 14:38:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100
00:13:34.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:34.121 14:38:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:34.121 14:38:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable
00:13:34.121 14:38:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:34.121 14:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:13:34.121 [2024-10-01 14:38:25.636756] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization...
00:13:34.121 I/O size of 3145728 is greater than zero copy threshold (65536).
00:13:34.121 Zero copy mechanism will not be used.
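At this point the test has recorded raid_pid=82960 and waitforlisten blocks (max_retries=100) until the freshly launched bdevperf process exposes its JSON-RPC listener on /var/tmp/spdk.sock. A standalone sketch of that kind of poll loop follows; the function name, interval, and the plain `-S` socket-file test are assumptions for illustration (SPDK's real helper also checks that the PID is still alive and performs an actual RPC round-trip):

```shell
#!/usr/bin/env bash
# Hedged sketch of a wait-for-RPC-socket loop in the spirit of
# waitforlisten. Illustrative only; not the SPDK implementation.
wait_for_unix_socket() {
    local sock=$1 max_retries=${2:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        # -S becomes true once the server has bound its UNIX-domain socket.
        if [ -S "$sock" ]; then
            return 0
        fi
        sleep 0.1
    done
    return 1   # timed out waiting for the listener
}
```

A caller would typically start the server in the background, then run `wait_for_unix_socket /var/tmp/spdk.sock || kill "$pid"` before issuing any rpc.py commands.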
00:13:34.121 [2024-10-01 14:38:25.637426] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82960 ]
00:13:34.121 [2024-10-01 14:38:25.787635] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:34.379 [2024-10-01 14:38:25.950482] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:13:34.379 [2024-10-01 14:38:26.061985] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:34.379 [2024-10-01 14:38:26.062019] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:34.944 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:13:34.944 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0
00:13:34.944 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:13:34.944 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:13:34.944 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:34.944 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:34.944 BaseBdev1_malloc
00:13:34.944 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:34.944 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:13:34.944 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:34.944 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:34.944 [2024-10-01 14:38:26.517835] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:13:34.944 [2024-10-01 14:38:26.517891] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:34.944 [2024-10-01 14:38:26.517916] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:13:34.944 [2024-10-01 14:38:26.517928] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:34.944 [2024-10-01 14:38:26.519724] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:34.944 [2024-10-01 14:38:26.519758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:13:34.944 BaseBdev1
00:13:34.944 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:34.944 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:13:34.944 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:13:34.944 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:34.944 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:34.944 BaseBdev2_malloc
00:13:34.944 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:34.944 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:13:34.944 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:34.944 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:34.944 [2024-10-01 14:38:26.562801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:13:34.944 [2024-10-01 14:38:26.562852] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:34.944 [2024-10-01 14:38:26.562868] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:13:34.944 [2024-10-01 14:38:26.562877] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:34.944 [2024-10-01 14:38:26.564627] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:34.944 [2024-10-01 14:38:26.564662] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:13:34.944 BaseBdev2
00:13:34.944 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:34.944 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:13:34.944 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:13:34.944 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:34.944 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:34.944 BaseBdev3_malloc
00:13:34.944 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:34.944 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:13:34.944 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:34.944 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:34.945 [2024-10-01 14:38:26.594720] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:13:34.945 [2024-10-01 14:38:26.594762] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:34.945 [2024-10-01 14:38:26.594779] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:13:34.945 [2024-10-01 14:38:26.594787] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:34.945 [2024-10-01 14:38:26.596505] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:34.945 [2024-10-01 14:38:26.596537] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:13:34.945 BaseBdev3
00:13:34.945 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:34.945 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:13:34.945 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:13:34.945 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:34.945 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:34.945 BaseBdev4_malloc
00:13:34.945 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:34.945 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4
00:13:34.945 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:34.945 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:34.945 [2024-10-01 14:38:26.626711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc
00:13:34.945 [2024-10-01 14:38:26.626753] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:34.945 [2024-10-01 14:38:26.626767] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:13:34.945 [2024-10-01 14:38:26.626776] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:35.203 [2024-10-01 14:38:26.628510] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:35.203 [2024-10-01 14:38:26.628543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:13:35.203 BaseBdev4
00:13:35.203 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:35.203 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:13:35.203 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:35.203 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:35.203 spare_malloc
00:13:35.203 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:35.203 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:13:35.203 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:35.203 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:35.203 spare_delay
00:13:35.203 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:35.203 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:13:35.203 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:35.203 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:35.203 [2024-10-01 14:38:26.666518] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:13:35.203 [2024-10-01 14:38:26.666574] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:35.203 [2024-10-01 14:38:26.666595] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:13:35.203 [2024-10-01 14:38:26.666609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:35.203 [2024-10-01 14:38:26.669064] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:35.203 [2024-10-01 14:38:26.669111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:13:35.203 spare
00:13:35.203 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:35.203 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1
00:13:35.203 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:35.203 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:35.203 [2024-10-01 14:38:26.674592] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:35.203 [2024-10-01 14:38:26.676244] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:13:35.203 [2024-10-01 14:38:26.676300] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:13:35.203 [2024-10-01 14:38:26.676343] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:13:35.203 [2024-10-01 14:38:26.676490] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:13:35.203 [2024-10-01 14:38:26.676507] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:13:35.203 [2024-10-01 14:38:26.676728] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:13:35.203 [2024-10-01 14:38:26.680772] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:13:35.203 [2024-10-01 14:38:26.680791] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:13:35.203 [2024-10-01 14:38:26.680932] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:35.203 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:35.203 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:13:35.203 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:35.203 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:35.203 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:13:35.203 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:35.203 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:35.203 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:35.203 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:35.203 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:35.203 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:35.203 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:35.203 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:35.203 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:35.203 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:35.203 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:35.203 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:35.203 "name": "raid_bdev1",
00:13:35.203 "uuid": "912dcbd7-93f7-4e02-9037-1ee89e87a356",
00:13:35.203 "strip_size_kb": 64,
00:13:35.203 "state": "online",
00:13:35.203 "raid_level": "raid5f",
00:13:35.203 "superblock": true,
00:13:35.203 "num_base_bdevs": 4,
00:13:35.203 "num_base_bdevs_discovered": 4,
00:13:35.204 "num_base_bdevs_operational": 4,
00:13:35.204 "base_bdevs_list": [
00:13:35.204 {
00:13:35.204 "name": "BaseBdev1",
00:13:35.204 "uuid": "4760f4a3-420c-522e-9178-b6a0d3231b91",
00:13:35.204 "is_configured": true,
00:13:35.204 "data_offset": 2048,
00:13:35.204 "data_size": 63488
00:13:35.204 },
00:13:35.204 {
00:13:35.204 "name": "BaseBdev2",
00:13:35.204 "uuid": "ac245b80-fa54-53f7-988c-f7f972822df3",
00:13:35.204 "is_configured": true,
00:13:35.204 "data_offset": 2048,
00:13:35.204 "data_size": 63488
00:13:35.204 },
00:13:35.204 {
00:13:35.204 "name": "BaseBdev3",
00:13:35.204 "uuid": "417e20f4-f44f-5b7e-8040-e2b86bea31ee",
00:13:35.204 "is_configured": true,
00:13:35.204 "data_offset": 2048,
00:13:35.204 "data_size": 63488
00:13:35.204 },
00:13:35.204 {
00:13:35.204 "name": "BaseBdev4",
00:13:35.204 "uuid": "ce2b40ba-1e74-55fa-b658-82ae2a4d1cd7",
00:13:35.204 "is_configured": true,
00:13:35.204 "data_offset": 2048,
00:13:35.204 "data_size": 63488
00:13:35.204 }
00:13:35.204 ]
00:13:35.204 }'
00:13:35.204 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:35.204 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:35.460 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:13:35.460 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:35.460 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:35.460 14:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:13:35.460 [2024-10-01 14:38:26.993532] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:35.460 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:35.460 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464
00:13:35.460 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:13:35.460 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:35.460 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:35.460 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:35.460 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:35.460 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048
00:13:35.460 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:13:35.460 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:13:35.460 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:13:35.460 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:13:35.460 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:13:35.460 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:13:35.460 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list
00:13:35.460 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:13:35.460 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list
00:13:35.460 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i
00:13:35.460 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:13:35.460 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:13:35.460 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:13:35.717 [2024-10-01 14:38:27.241450] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150
00:13:35.717 /dev/nbd0
00:13:35.717 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:13:35.717 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:13:35.717 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:13:35.717 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i
00:13:35.717 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:13:35.717 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:13:35.717 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:13:35.717 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break
00:13:35.717 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:13:35.717 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:13:35.717 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:35.717 1+0 records in
00:13:35.717 1+0 records out
00:13:35.717 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251506 s, 16.3 MB/s
00:13:35.717 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:35.717 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096
00:13:35.717 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:35.717 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:13:35.717 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0
00:13:35.717 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:13:35.717 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:13:35.717 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']'
00:13:35.717 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384
00:13:35.717 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192
00:13:35.717 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct
00:13:36.281 496+0 records in
00:13:36.281 496+0 records out
00:13:36.281 97517568 bytes (98 MB, 93 MiB) copied, 0.464329 s, 210 MB/s
00:13:36.281 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:13:36.281 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:13:36.281 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:13:36.281 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list
00:13:36.281 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i
00:13:36.281 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:36.281 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:13:36.281 [2024-10-01 14:38:27.948327] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:36.538 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:13:36.538 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:13:36.538 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:13:36.538 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:36.538 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:36.538 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:13:36.538 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:13:36.538 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:13:36.538 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:13:36.538 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:36.538 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:36.538 [2024-10-01 14:38:27.980918] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:13:36.538 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:36.538 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:13:36.538 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:36.538 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:36.538 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:13:36.538 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:36.538 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:36.538 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:36.538 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:36.539 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:36.539 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:36.539 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:36.539 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:36.539 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:36.539 14:38:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:36.539 14:38:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
14:38:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:36.539 "name": "raid_bdev1",
00:13:36.539 "uuid": "912dcbd7-93f7-4e02-9037-1ee89e87a356",
00:13:36.539 "strip_size_kb": 64,
00:13:36.539 "state": "online",
00:13:36.539 "raid_level": "raid5f",
00:13:36.539 "superblock": true,
00:13:36.539 "num_base_bdevs": 4,
00:13:36.539 "num_base_bdevs_discovered": 3,
00:13:36.539 "num_base_bdevs_operational": 3,
00:13:36.539 "base_bdevs_list": [
00:13:36.539 {
00:13:36.539 "name": null,
00:13:36.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.539 "is_configured": false, 00:13:36.539 "data_offset": 0, 00:13:36.539 "data_size": 63488 00:13:36.539 }, 00:13:36.539 { 00:13:36.539 "name": "BaseBdev2", 00:13:36.539 "uuid": "ac245b80-fa54-53f7-988c-f7f972822df3", 00:13:36.539 "is_configured": true, 00:13:36.539 "data_offset": 2048, 00:13:36.539 "data_size": 63488 00:13:36.539 }, 00:13:36.539 { 00:13:36.539 "name": "BaseBdev3", 00:13:36.539 "uuid": "417e20f4-f44f-5b7e-8040-e2b86bea31ee", 00:13:36.539 "is_configured": true, 00:13:36.539 "data_offset": 2048, 00:13:36.539 "data_size": 63488 00:13:36.539 }, 00:13:36.539 { 00:13:36.539 "name": "BaseBdev4", 00:13:36.539 "uuid": "ce2b40ba-1e74-55fa-b658-82ae2a4d1cd7", 00:13:36.539 "is_configured": true, 00:13:36.539 "data_offset": 2048, 00:13:36.539 "data_size": 63488 00:13:36.539 } 00:13:36.539 ] 00:13:36.539 }' 00:13:36.539 14:38:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.539 14:38:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.796 14:38:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:36.796 14:38:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.796 14:38:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.796 [2024-10-01 14:38:28.293020] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:36.796 [2024-10-01 14:38:28.301790] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:13:36.796 14:38:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.796 14:38:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:36.796 [2024-10-01 14:38:28.307494] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid 
bdev raid_bdev1 00:13:37.729 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:37.729 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.729 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:37.729 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:37.729 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.729 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.729 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.729 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.729 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.729 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.729 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.729 "name": "raid_bdev1", 00:13:37.729 "uuid": "912dcbd7-93f7-4e02-9037-1ee89e87a356", 00:13:37.729 "strip_size_kb": 64, 00:13:37.729 "state": "online", 00:13:37.729 "raid_level": "raid5f", 00:13:37.729 "superblock": true, 00:13:37.729 "num_base_bdevs": 4, 00:13:37.729 "num_base_bdevs_discovered": 4, 00:13:37.729 "num_base_bdevs_operational": 4, 00:13:37.729 "process": { 00:13:37.729 "type": "rebuild", 00:13:37.729 "target": "spare", 00:13:37.729 "progress": { 00:13:37.729 "blocks": 19200, 00:13:37.729 "percent": 10 00:13:37.729 } 00:13:37.729 }, 00:13:37.729 "base_bdevs_list": [ 00:13:37.729 { 00:13:37.729 "name": "spare", 00:13:37.729 "uuid": "c555704f-1667-5d79-b85a-ba935167119c", 00:13:37.729 "is_configured": true, 
00:13:37.729 "data_offset": 2048, 00:13:37.729 "data_size": 63488 00:13:37.729 }, 00:13:37.729 { 00:13:37.729 "name": "BaseBdev2", 00:13:37.729 "uuid": "ac245b80-fa54-53f7-988c-f7f972822df3", 00:13:37.729 "is_configured": true, 00:13:37.729 "data_offset": 2048, 00:13:37.729 "data_size": 63488 00:13:37.729 }, 00:13:37.729 { 00:13:37.729 "name": "BaseBdev3", 00:13:37.729 "uuid": "417e20f4-f44f-5b7e-8040-e2b86bea31ee", 00:13:37.729 "is_configured": true, 00:13:37.729 "data_offset": 2048, 00:13:37.729 "data_size": 63488 00:13:37.729 }, 00:13:37.729 { 00:13:37.729 "name": "BaseBdev4", 00:13:37.729 "uuid": "ce2b40ba-1e74-55fa-b658-82ae2a4d1cd7", 00:13:37.729 "is_configured": true, 00:13:37.729 "data_offset": 2048, 00:13:37.729 "data_size": 63488 00:13:37.729 } 00:13:37.729 ] 00:13:37.729 }' 00:13:37.729 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.729 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:37.729 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.729 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:37.729 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:37.729 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.729 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.729 [2024-10-01 14:38:29.404203] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:37.987 [2024-10-01 14:38:29.415555] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:37.987 [2024-10-01 14:38:29.415611] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:37.987 [2024-10-01 
14:38:29.415627] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:37.987 [2024-10-01 14:38:29.415637] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:37.987 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.987 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:37.987 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.987 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.987 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:37.987 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:37.987 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:37.987 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.987 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.987 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.987 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.987 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.987 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.987 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.987 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.987 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.987 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.987 "name": "raid_bdev1", 00:13:37.987 "uuid": "912dcbd7-93f7-4e02-9037-1ee89e87a356", 00:13:37.987 "strip_size_kb": 64, 00:13:37.987 "state": "online", 00:13:37.987 "raid_level": "raid5f", 00:13:37.987 "superblock": true, 00:13:37.987 "num_base_bdevs": 4, 00:13:37.987 "num_base_bdevs_discovered": 3, 00:13:37.987 "num_base_bdevs_operational": 3, 00:13:37.987 "base_bdevs_list": [ 00:13:37.987 { 00:13:37.987 "name": null, 00:13:37.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.987 "is_configured": false, 00:13:37.987 "data_offset": 0, 00:13:37.987 "data_size": 63488 00:13:37.987 }, 00:13:37.987 { 00:13:37.987 "name": "BaseBdev2", 00:13:37.987 "uuid": "ac245b80-fa54-53f7-988c-f7f972822df3", 00:13:37.987 "is_configured": true, 00:13:37.987 "data_offset": 2048, 00:13:37.987 "data_size": 63488 00:13:37.987 }, 00:13:37.987 { 00:13:37.987 "name": "BaseBdev3", 00:13:37.987 "uuid": "417e20f4-f44f-5b7e-8040-e2b86bea31ee", 00:13:37.987 "is_configured": true, 00:13:37.987 "data_offset": 2048, 00:13:37.987 "data_size": 63488 00:13:37.987 }, 00:13:37.987 { 00:13:37.987 "name": "BaseBdev4", 00:13:37.987 "uuid": "ce2b40ba-1e74-55fa-b658-82ae2a4d1cd7", 00:13:37.987 "is_configured": true, 00:13:37.987 "data_offset": 2048, 00:13:37.987 "data_size": 63488 00:13:37.987 } 00:13:37.987 ] 00:13:37.987 }' 00:13:37.987 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.987 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.244 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:38.244 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:38.244 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:38.244 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:38.244 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:38.244 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.244 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.244 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.244 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.244 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.244 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:38.244 "name": "raid_bdev1", 00:13:38.244 "uuid": "912dcbd7-93f7-4e02-9037-1ee89e87a356", 00:13:38.244 "strip_size_kb": 64, 00:13:38.244 "state": "online", 00:13:38.244 "raid_level": "raid5f", 00:13:38.244 "superblock": true, 00:13:38.244 "num_base_bdevs": 4, 00:13:38.244 "num_base_bdevs_discovered": 3, 00:13:38.244 "num_base_bdevs_operational": 3, 00:13:38.244 "base_bdevs_list": [ 00:13:38.244 { 00:13:38.244 "name": null, 00:13:38.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.244 "is_configured": false, 00:13:38.244 "data_offset": 0, 00:13:38.244 "data_size": 63488 00:13:38.244 }, 00:13:38.244 { 00:13:38.244 "name": "BaseBdev2", 00:13:38.244 "uuid": "ac245b80-fa54-53f7-988c-f7f972822df3", 00:13:38.244 "is_configured": true, 00:13:38.244 "data_offset": 2048, 00:13:38.244 "data_size": 63488 00:13:38.244 }, 00:13:38.244 { 00:13:38.244 "name": "BaseBdev3", 00:13:38.244 "uuid": "417e20f4-f44f-5b7e-8040-e2b86bea31ee", 00:13:38.244 "is_configured": true, 00:13:38.244 "data_offset": 2048, 00:13:38.244 "data_size": 63488 00:13:38.244 }, 
00:13:38.244 { 00:13:38.244 "name": "BaseBdev4", 00:13:38.244 "uuid": "ce2b40ba-1e74-55fa-b658-82ae2a4d1cd7", 00:13:38.244 "is_configured": true, 00:13:38.244 "data_offset": 2048, 00:13:38.244 "data_size": 63488 00:13:38.244 } 00:13:38.244 ] 00:13:38.244 }' 00:13:38.244 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:38.244 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:38.244 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:38.244 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:38.244 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:38.244 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.244 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.244 [2024-10-01 14:38:29.836347] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:38.244 [2024-10-01 14:38:29.844507] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:13:38.244 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.244 14:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:38.244 [2024-10-01 14:38:29.850176] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:39.232 14:38:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:39.232 14:38:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.232 14:38:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:13:39.232 14:38:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:39.232 14:38:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.232 14:38:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.232 14:38:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.232 14:38:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.232 14:38:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.232 14:38:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.232 14:38:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.232 "name": "raid_bdev1", 00:13:39.232 "uuid": "912dcbd7-93f7-4e02-9037-1ee89e87a356", 00:13:39.232 "strip_size_kb": 64, 00:13:39.232 "state": "online", 00:13:39.232 "raid_level": "raid5f", 00:13:39.232 "superblock": true, 00:13:39.232 "num_base_bdevs": 4, 00:13:39.232 "num_base_bdevs_discovered": 4, 00:13:39.232 "num_base_bdevs_operational": 4, 00:13:39.232 "process": { 00:13:39.232 "type": "rebuild", 00:13:39.232 "target": "spare", 00:13:39.232 "progress": { 00:13:39.232 "blocks": 17280, 00:13:39.232 "percent": 9 00:13:39.232 } 00:13:39.232 }, 00:13:39.232 "base_bdevs_list": [ 00:13:39.232 { 00:13:39.232 "name": "spare", 00:13:39.232 "uuid": "c555704f-1667-5d79-b85a-ba935167119c", 00:13:39.232 "is_configured": true, 00:13:39.232 "data_offset": 2048, 00:13:39.232 "data_size": 63488 00:13:39.232 }, 00:13:39.232 { 00:13:39.232 "name": "BaseBdev2", 00:13:39.232 "uuid": "ac245b80-fa54-53f7-988c-f7f972822df3", 00:13:39.232 "is_configured": true, 00:13:39.232 "data_offset": 2048, 00:13:39.232 "data_size": 63488 00:13:39.232 }, 00:13:39.232 { 00:13:39.232 "name": "BaseBdev3", 00:13:39.232 "uuid": 
"417e20f4-f44f-5b7e-8040-e2b86bea31ee", 00:13:39.232 "is_configured": true, 00:13:39.232 "data_offset": 2048, 00:13:39.232 "data_size": 63488 00:13:39.232 }, 00:13:39.232 { 00:13:39.232 "name": "BaseBdev4", 00:13:39.232 "uuid": "ce2b40ba-1e74-55fa-b658-82ae2a4d1cd7", 00:13:39.232 "is_configured": true, 00:13:39.232 "data_offset": 2048, 00:13:39.232 "data_size": 63488 00:13:39.232 } 00:13:39.232 ] 00:13:39.232 }' 00:13:39.232 14:38:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.504 14:38:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:39.504 14:38:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.504 14:38:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:39.504 14:38:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:39.504 14:38:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:39.504 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:39.504 14:38:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:39.504 14:38:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:13:39.504 14:38:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=530 00:13:39.504 14:38:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:39.504 14:38:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:39.504 14:38:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.504 14:38:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:13:39.504 14:38:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:39.504 14:38:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.504 14:38:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.504 14:38:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.504 14:38:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.504 14:38:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.504 14:38:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.504 14:38:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.504 "name": "raid_bdev1", 00:13:39.504 "uuid": "912dcbd7-93f7-4e02-9037-1ee89e87a356", 00:13:39.504 "strip_size_kb": 64, 00:13:39.504 "state": "online", 00:13:39.504 "raid_level": "raid5f", 00:13:39.504 "superblock": true, 00:13:39.504 "num_base_bdevs": 4, 00:13:39.504 "num_base_bdevs_discovered": 4, 00:13:39.504 "num_base_bdevs_operational": 4, 00:13:39.504 "process": { 00:13:39.504 "type": "rebuild", 00:13:39.504 "target": "spare", 00:13:39.504 "progress": { 00:13:39.504 "blocks": 21120, 00:13:39.504 "percent": 11 00:13:39.504 } 00:13:39.504 }, 00:13:39.504 "base_bdevs_list": [ 00:13:39.504 { 00:13:39.504 "name": "spare", 00:13:39.504 "uuid": "c555704f-1667-5d79-b85a-ba935167119c", 00:13:39.504 "is_configured": true, 00:13:39.504 "data_offset": 2048, 00:13:39.504 "data_size": 63488 00:13:39.504 }, 00:13:39.504 { 00:13:39.504 "name": "BaseBdev2", 00:13:39.504 "uuid": "ac245b80-fa54-53f7-988c-f7f972822df3", 00:13:39.504 "is_configured": true, 00:13:39.504 "data_offset": 2048, 00:13:39.504 "data_size": 63488 00:13:39.504 }, 00:13:39.504 { 00:13:39.504 "name": "BaseBdev3", 00:13:39.504 "uuid": 
"417e20f4-f44f-5b7e-8040-e2b86bea31ee", 00:13:39.504 "is_configured": true, 00:13:39.504 "data_offset": 2048, 00:13:39.504 "data_size": 63488 00:13:39.504 }, 00:13:39.504 { 00:13:39.504 "name": "BaseBdev4", 00:13:39.504 "uuid": "ce2b40ba-1e74-55fa-b658-82ae2a4d1cd7", 00:13:39.504 "is_configured": true, 00:13:39.504 "data_offset": 2048, 00:13:39.504 "data_size": 63488 00:13:39.504 } 00:13:39.504 ] 00:13:39.504 }' 00:13:39.504 14:38:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.504 14:38:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:39.504 14:38:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.504 14:38:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:39.504 14:38:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:40.436 14:38:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:40.436 14:38:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:40.436 14:38:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.436 14:38:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:40.436 14:38:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:40.436 14:38:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.436 14:38:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.436 14:38:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.436 14:38:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:40.436 14:38:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.436 14:38:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.436 14:38:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:40.436 "name": "raid_bdev1", 00:13:40.436 "uuid": "912dcbd7-93f7-4e02-9037-1ee89e87a356", 00:13:40.436 "strip_size_kb": 64, 00:13:40.436 "state": "online", 00:13:40.436 "raid_level": "raid5f", 00:13:40.436 "superblock": true, 00:13:40.436 "num_base_bdevs": 4, 00:13:40.436 "num_base_bdevs_discovered": 4, 00:13:40.436 "num_base_bdevs_operational": 4, 00:13:40.436 "process": { 00:13:40.436 "type": "rebuild", 00:13:40.436 "target": "spare", 00:13:40.436 "progress": { 00:13:40.436 "blocks": 40320, 00:13:40.436 "percent": 21 00:13:40.436 } 00:13:40.436 }, 00:13:40.436 "base_bdevs_list": [ 00:13:40.436 { 00:13:40.436 "name": "spare", 00:13:40.436 "uuid": "c555704f-1667-5d79-b85a-ba935167119c", 00:13:40.436 "is_configured": true, 00:13:40.436 "data_offset": 2048, 00:13:40.436 "data_size": 63488 00:13:40.437 }, 00:13:40.437 { 00:13:40.437 "name": "BaseBdev2", 00:13:40.437 "uuid": "ac245b80-fa54-53f7-988c-f7f972822df3", 00:13:40.437 "is_configured": true, 00:13:40.437 "data_offset": 2048, 00:13:40.437 "data_size": 63488 00:13:40.437 }, 00:13:40.437 { 00:13:40.437 "name": "BaseBdev3", 00:13:40.437 "uuid": "417e20f4-f44f-5b7e-8040-e2b86bea31ee", 00:13:40.437 "is_configured": true, 00:13:40.437 "data_offset": 2048, 00:13:40.437 "data_size": 63488 00:13:40.437 }, 00:13:40.437 { 00:13:40.437 "name": "BaseBdev4", 00:13:40.437 "uuid": "ce2b40ba-1e74-55fa-b658-82ae2a4d1cd7", 00:13:40.437 "is_configured": true, 00:13:40.437 "data_offset": 2048, 00:13:40.437 "data_size": 63488 00:13:40.437 } 00:13:40.437 ] 00:13:40.437 }' 00:13:40.437 14:38:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:40.694 14:38:32 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:40.694 14:38:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:40.694 14:38:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:40.694 14:38:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:41.625 14:38:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:41.625 14:38:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:41.625 14:38:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.625 14:38:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:41.625 14:38:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:41.625 14:38:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.625 14:38:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.625 14:38:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.625 14:38:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.625 14:38:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.625 14:38:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.625 14:38:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.625 "name": "raid_bdev1", 00:13:41.625 "uuid": "912dcbd7-93f7-4e02-9037-1ee89e87a356", 00:13:41.625 "strip_size_kb": 64, 00:13:41.625 "state": "online", 00:13:41.625 "raid_level": "raid5f", 00:13:41.625 "superblock": true, 
00:13:41.625 "num_base_bdevs": 4, 00:13:41.625 "num_base_bdevs_discovered": 4, 00:13:41.625 "num_base_bdevs_operational": 4, 00:13:41.625 "process": { 00:13:41.626 "type": "rebuild", 00:13:41.626 "target": "spare", 00:13:41.626 "progress": { 00:13:41.626 "blocks": 61440, 00:13:41.626 "percent": 32 00:13:41.626 } 00:13:41.626 }, 00:13:41.626 "base_bdevs_list": [ 00:13:41.626 { 00:13:41.626 "name": "spare", 00:13:41.626 "uuid": "c555704f-1667-5d79-b85a-ba935167119c", 00:13:41.626 "is_configured": true, 00:13:41.626 "data_offset": 2048, 00:13:41.626 "data_size": 63488 00:13:41.626 }, 00:13:41.626 { 00:13:41.626 "name": "BaseBdev2", 00:13:41.626 "uuid": "ac245b80-fa54-53f7-988c-f7f972822df3", 00:13:41.626 "is_configured": true, 00:13:41.626 "data_offset": 2048, 00:13:41.626 "data_size": 63488 00:13:41.626 }, 00:13:41.626 { 00:13:41.626 "name": "BaseBdev3", 00:13:41.626 "uuid": "417e20f4-f44f-5b7e-8040-e2b86bea31ee", 00:13:41.626 "is_configured": true, 00:13:41.626 "data_offset": 2048, 00:13:41.626 "data_size": 63488 00:13:41.626 }, 00:13:41.626 { 00:13:41.626 "name": "BaseBdev4", 00:13:41.626 "uuid": "ce2b40ba-1e74-55fa-b658-82ae2a4d1cd7", 00:13:41.626 "is_configured": true, 00:13:41.626 "data_offset": 2048, 00:13:41.626 "data_size": 63488 00:13:41.626 } 00:13:41.626 ] 00:13:41.626 }' 00:13:41.626 14:38:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.626 14:38:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:41.626 14:38:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.626 14:38:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:41.626 14:38:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:42.998 14:38:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:42.998 14:38:34 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:42.998 14:38:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.998 14:38:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:42.998 14:38:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:42.998 14:38:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.998 14:38:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.998 14:38:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.998 14:38:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.998 14:38:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.998 14:38:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.998 14:38:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.998 "name": "raid_bdev1", 00:13:42.998 "uuid": "912dcbd7-93f7-4e02-9037-1ee89e87a356", 00:13:42.998 "strip_size_kb": 64, 00:13:42.998 "state": "online", 00:13:42.998 "raid_level": "raid5f", 00:13:42.998 "superblock": true, 00:13:42.998 "num_base_bdevs": 4, 00:13:42.998 "num_base_bdevs_discovered": 4, 00:13:42.998 "num_base_bdevs_operational": 4, 00:13:42.998 "process": { 00:13:42.998 "type": "rebuild", 00:13:42.998 "target": "spare", 00:13:42.998 "progress": { 00:13:42.998 "blocks": 82560, 00:13:42.998 "percent": 43 00:13:42.998 } 00:13:42.998 }, 00:13:42.998 "base_bdevs_list": [ 00:13:42.998 { 00:13:42.998 "name": "spare", 00:13:42.998 "uuid": "c555704f-1667-5d79-b85a-ba935167119c", 00:13:42.998 "is_configured": true, 00:13:42.998 "data_offset": 2048, 00:13:42.998 
"data_size": 63488 00:13:42.998 }, 00:13:42.998 { 00:13:42.998 "name": "BaseBdev2", 00:13:42.998 "uuid": "ac245b80-fa54-53f7-988c-f7f972822df3", 00:13:42.998 "is_configured": true, 00:13:42.998 "data_offset": 2048, 00:13:42.998 "data_size": 63488 00:13:42.998 }, 00:13:42.998 { 00:13:42.998 "name": "BaseBdev3", 00:13:42.998 "uuid": "417e20f4-f44f-5b7e-8040-e2b86bea31ee", 00:13:42.998 "is_configured": true, 00:13:42.998 "data_offset": 2048, 00:13:42.998 "data_size": 63488 00:13:42.998 }, 00:13:42.998 { 00:13:42.998 "name": "BaseBdev4", 00:13:42.998 "uuid": "ce2b40ba-1e74-55fa-b658-82ae2a4d1cd7", 00:13:42.998 "is_configured": true, 00:13:42.998 "data_offset": 2048, 00:13:42.998 "data_size": 63488 00:13:42.998 } 00:13:42.998 ] 00:13:42.998 }' 00:13:42.998 14:38:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.998 14:38:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:42.998 14:38:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:42.998 14:38:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:42.998 14:38:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:43.933 14:38:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:43.933 14:38:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:43.933 14:38:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.933 14:38:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:43.933 14:38:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:43.933 14:38:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:13:43.933 14:38:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.933 14:38:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.933 14:38:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.933 14:38:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.933 14:38:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.933 14:38:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.933 "name": "raid_bdev1", 00:13:43.933 "uuid": "912dcbd7-93f7-4e02-9037-1ee89e87a356", 00:13:43.933 "strip_size_kb": 64, 00:13:43.933 "state": "online", 00:13:43.933 "raid_level": "raid5f", 00:13:43.933 "superblock": true, 00:13:43.933 "num_base_bdevs": 4, 00:13:43.933 "num_base_bdevs_discovered": 4, 00:13:43.933 "num_base_bdevs_operational": 4, 00:13:43.933 "process": { 00:13:43.933 "type": "rebuild", 00:13:43.933 "target": "spare", 00:13:43.933 "progress": { 00:13:43.933 "blocks": 103680, 00:13:43.933 "percent": 54 00:13:43.933 } 00:13:43.933 }, 00:13:43.933 "base_bdevs_list": [ 00:13:43.933 { 00:13:43.933 "name": "spare", 00:13:43.933 "uuid": "c555704f-1667-5d79-b85a-ba935167119c", 00:13:43.933 "is_configured": true, 00:13:43.933 "data_offset": 2048, 00:13:43.933 "data_size": 63488 00:13:43.933 }, 00:13:43.933 { 00:13:43.933 "name": "BaseBdev2", 00:13:43.933 "uuid": "ac245b80-fa54-53f7-988c-f7f972822df3", 00:13:43.933 "is_configured": true, 00:13:43.933 "data_offset": 2048, 00:13:43.933 "data_size": 63488 00:13:43.933 }, 00:13:43.933 { 00:13:43.933 "name": "BaseBdev3", 00:13:43.933 "uuid": "417e20f4-f44f-5b7e-8040-e2b86bea31ee", 00:13:43.933 "is_configured": true, 00:13:43.933 "data_offset": 2048, 00:13:43.933 "data_size": 63488 00:13:43.933 }, 00:13:43.933 { 00:13:43.933 "name": "BaseBdev4", 
00:13:43.933 "uuid": "ce2b40ba-1e74-55fa-b658-82ae2a4d1cd7", 00:13:43.933 "is_configured": true, 00:13:43.933 "data_offset": 2048, 00:13:43.933 "data_size": 63488 00:13:43.933 } 00:13:43.933 ] 00:13:43.933 }' 00:13:43.933 14:38:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.933 14:38:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:43.933 14:38:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.933 14:38:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:43.933 14:38:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:44.926 14:38:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:44.926 14:38:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:44.926 14:38:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:44.926 14:38:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:44.926 14:38:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:44.926 14:38:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:44.926 14:38:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.926 14:38:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.926 14:38:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.926 14:38:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.926 14:38:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:13:44.926 14:38:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:44.926 "name": "raid_bdev1", 00:13:44.926 "uuid": "912dcbd7-93f7-4e02-9037-1ee89e87a356", 00:13:44.926 "strip_size_kb": 64, 00:13:44.926 "state": "online", 00:13:44.926 "raid_level": "raid5f", 00:13:44.926 "superblock": true, 00:13:44.926 "num_base_bdevs": 4, 00:13:44.926 "num_base_bdevs_discovered": 4, 00:13:44.926 "num_base_bdevs_operational": 4, 00:13:44.926 "process": { 00:13:44.926 "type": "rebuild", 00:13:44.926 "target": "spare", 00:13:44.926 "progress": { 00:13:44.926 "blocks": 124800, 00:13:44.926 "percent": 65 00:13:44.926 } 00:13:44.926 }, 00:13:44.926 "base_bdevs_list": [ 00:13:44.926 { 00:13:44.926 "name": "spare", 00:13:44.926 "uuid": "c555704f-1667-5d79-b85a-ba935167119c", 00:13:44.926 "is_configured": true, 00:13:44.926 "data_offset": 2048, 00:13:44.926 "data_size": 63488 00:13:44.926 }, 00:13:44.926 { 00:13:44.926 "name": "BaseBdev2", 00:13:44.926 "uuid": "ac245b80-fa54-53f7-988c-f7f972822df3", 00:13:44.926 "is_configured": true, 00:13:44.926 "data_offset": 2048, 00:13:44.926 "data_size": 63488 00:13:44.926 }, 00:13:44.926 { 00:13:44.926 "name": "BaseBdev3", 00:13:44.926 "uuid": "417e20f4-f44f-5b7e-8040-e2b86bea31ee", 00:13:44.926 "is_configured": true, 00:13:44.926 "data_offset": 2048, 00:13:44.926 "data_size": 63488 00:13:44.926 }, 00:13:44.926 { 00:13:44.926 "name": "BaseBdev4", 00:13:44.926 "uuid": "ce2b40ba-1e74-55fa-b658-82ae2a4d1cd7", 00:13:44.926 "is_configured": true, 00:13:44.926 "data_offset": 2048, 00:13:44.926 "data_size": 63488 00:13:44.926 } 00:13:44.926 ] 00:13:44.926 }' 00:13:44.926 14:38:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:44.926 14:38:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:44.926 14:38:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:13:44.926 14:38:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:44.926 14:38:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:46.299 14:38:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:46.299 14:38:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:46.299 14:38:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:46.299 14:38:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:46.299 14:38:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:46.299 14:38:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:46.300 14:38:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.300 14:38:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.300 14:38:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.300 14:38:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.300 14:38:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.300 14:38:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:46.300 "name": "raid_bdev1", 00:13:46.300 "uuid": "912dcbd7-93f7-4e02-9037-1ee89e87a356", 00:13:46.300 "strip_size_kb": 64, 00:13:46.300 "state": "online", 00:13:46.300 "raid_level": "raid5f", 00:13:46.300 "superblock": true, 00:13:46.300 "num_base_bdevs": 4, 00:13:46.300 "num_base_bdevs_discovered": 4, 00:13:46.300 "num_base_bdevs_operational": 4, 00:13:46.300 "process": { 00:13:46.300 "type": "rebuild", 00:13:46.300 "target": "spare", 
00:13:46.300 "progress": { 00:13:46.300 "blocks": 145920, 00:13:46.300 "percent": 76 00:13:46.300 } 00:13:46.300 }, 00:13:46.300 "base_bdevs_list": [ 00:13:46.300 { 00:13:46.300 "name": "spare", 00:13:46.300 "uuid": "c555704f-1667-5d79-b85a-ba935167119c", 00:13:46.300 "is_configured": true, 00:13:46.300 "data_offset": 2048, 00:13:46.300 "data_size": 63488 00:13:46.300 }, 00:13:46.300 { 00:13:46.300 "name": "BaseBdev2", 00:13:46.300 "uuid": "ac245b80-fa54-53f7-988c-f7f972822df3", 00:13:46.300 "is_configured": true, 00:13:46.300 "data_offset": 2048, 00:13:46.300 "data_size": 63488 00:13:46.300 }, 00:13:46.300 { 00:13:46.300 "name": "BaseBdev3", 00:13:46.300 "uuid": "417e20f4-f44f-5b7e-8040-e2b86bea31ee", 00:13:46.300 "is_configured": true, 00:13:46.300 "data_offset": 2048, 00:13:46.300 "data_size": 63488 00:13:46.300 }, 00:13:46.300 { 00:13:46.300 "name": "BaseBdev4", 00:13:46.300 "uuid": "ce2b40ba-1e74-55fa-b658-82ae2a4d1cd7", 00:13:46.300 "is_configured": true, 00:13:46.300 "data_offset": 2048, 00:13:46.300 "data_size": 63488 00:13:46.300 } 00:13:46.300 ] 00:13:46.300 }' 00:13:46.300 14:38:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:46.300 14:38:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:46.300 14:38:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:46.300 14:38:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:46.300 14:38:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:47.232 14:38:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:47.232 14:38:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:47.232 14:38:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:13:47.232 14:38:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:47.232 14:38:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:47.232 14:38:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.232 14:38:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.233 14:38:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.233 14:38:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.233 14:38:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.233 14:38:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.233 14:38:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.233 "name": "raid_bdev1", 00:13:47.233 "uuid": "912dcbd7-93f7-4e02-9037-1ee89e87a356", 00:13:47.233 "strip_size_kb": 64, 00:13:47.233 "state": "online", 00:13:47.233 "raid_level": "raid5f", 00:13:47.233 "superblock": true, 00:13:47.233 "num_base_bdevs": 4, 00:13:47.233 "num_base_bdevs_discovered": 4, 00:13:47.233 "num_base_bdevs_operational": 4, 00:13:47.233 "process": { 00:13:47.233 "type": "rebuild", 00:13:47.233 "target": "spare", 00:13:47.233 "progress": { 00:13:47.233 "blocks": 167040, 00:13:47.233 "percent": 87 00:13:47.233 } 00:13:47.233 }, 00:13:47.233 "base_bdevs_list": [ 00:13:47.233 { 00:13:47.233 "name": "spare", 00:13:47.233 "uuid": "c555704f-1667-5d79-b85a-ba935167119c", 00:13:47.233 "is_configured": true, 00:13:47.233 "data_offset": 2048, 00:13:47.233 "data_size": 63488 00:13:47.233 }, 00:13:47.233 { 00:13:47.233 "name": "BaseBdev2", 00:13:47.233 "uuid": "ac245b80-fa54-53f7-988c-f7f972822df3", 00:13:47.233 "is_configured": true, 00:13:47.233 
"data_offset": 2048, 00:13:47.233 "data_size": 63488 00:13:47.233 }, 00:13:47.233 { 00:13:47.233 "name": "BaseBdev3", 00:13:47.233 "uuid": "417e20f4-f44f-5b7e-8040-e2b86bea31ee", 00:13:47.233 "is_configured": true, 00:13:47.233 "data_offset": 2048, 00:13:47.233 "data_size": 63488 00:13:47.233 }, 00:13:47.233 { 00:13:47.233 "name": "BaseBdev4", 00:13:47.233 "uuid": "ce2b40ba-1e74-55fa-b658-82ae2a4d1cd7", 00:13:47.233 "is_configured": true, 00:13:47.233 "data_offset": 2048, 00:13:47.233 "data_size": 63488 00:13:47.233 } 00:13:47.233 ] 00:13:47.233 }' 00:13:47.233 14:38:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.233 14:38:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:47.233 14:38:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.233 14:38:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:47.233 14:38:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:48.167 14:38:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:48.167 14:38:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:48.167 14:38:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:48.167 14:38:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:48.167 14:38:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:48.167 14:38:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:48.167 14:38:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.167 14:38:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 
-- # xtrace_disable 00:13:48.167 14:38:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.167 14:38:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.167 14:38:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.167 14:38:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:48.167 "name": "raid_bdev1", 00:13:48.167 "uuid": "912dcbd7-93f7-4e02-9037-1ee89e87a356", 00:13:48.167 "strip_size_kb": 64, 00:13:48.167 "state": "online", 00:13:48.167 "raid_level": "raid5f", 00:13:48.167 "superblock": true, 00:13:48.167 "num_base_bdevs": 4, 00:13:48.167 "num_base_bdevs_discovered": 4, 00:13:48.167 "num_base_bdevs_operational": 4, 00:13:48.167 "process": { 00:13:48.167 "type": "rebuild", 00:13:48.167 "target": "spare", 00:13:48.167 "progress": { 00:13:48.167 "blocks": 188160, 00:13:48.167 "percent": 98 00:13:48.167 } 00:13:48.167 }, 00:13:48.167 "base_bdevs_list": [ 00:13:48.167 { 00:13:48.167 "name": "spare", 00:13:48.167 "uuid": "c555704f-1667-5d79-b85a-ba935167119c", 00:13:48.167 "is_configured": true, 00:13:48.167 "data_offset": 2048, 00:13:48.167 "data_size": 63488 00:13:48.167 }, 00:13:48.167 { 00:13:48.167 "name": "BaseBdev2", 00:13:48.167 "uuid": "ac245b80-fa54-53f7-988c-f7f972822df3", 00:13:48.167 "is_configured": true, 00:13:48.167 "data_offset": 2048, 00:13:48.167 "data_size": 63488 00:13:48.167 }, 00:13:48.167 { 00:13:48.167 "name": "BaseBdev3", 00:13:48.167 "uuid": "417e20f4-f44f-5b7e-8040-e2b86bea31ee", 00:13:48.167 "is_configured": true, 00:13:48.167 "data_offset": 2048, 00:13:48.167 "data_size": 63488 00:13:48.167 }, 00:13:48.167 { 00:13:48.167 "name": "BaseBdev4", 00:13:48.167 "uuid": "ce2b40ba-1e74-55fa-b658-82ae2a4d1cd7", 00:13:48.167 "is_configured": true, 00:13:48.167 "data_offset": 2048, 00:13:48.167 "data_size": 63488 00:13:48.167 } 00:13:48.167 ] 00:13:48.167 }' 
00:13:48.167 14:38:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:48.167 14:38:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:48.167 14:38:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:48.424 14:38:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:48.424 14:38:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:48.424 [2024-10-01 14:38:39.917502] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:48.424 [2024-10-01 14:38:39.917573] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:48.424 [2024-10-01 14:38:39.917698] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.358 14:38:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:49.358 14:38:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:49.358 14:38:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.358 14:38:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:49.358 14:38:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:49.358 14:38:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.358 14:38:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.358 14:38:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.358 14:38:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.358 14:38:40 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.358 14:38:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.358 14:38:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.358 "name": "raid_bdev1", 00:13:49.358 "uuid": "912dcbd7-93f7-4e02-9037-1ee89e87a356", 00:13:49.358 "strip_size_kb": 64, 00:13:49.358 "state": "online", 00:13:49.358 "raid_level": "raid5f", 00:13:49.358 "superblock": true, 00:13:49.358 "num_base_bdevs": 4, 00:13:49.358 "num_base_bdevs_discovered": 4, 00:13:49.358 "num_base_bdevs_operational": 4, 00:13:49.358 "base_bdevs_list": [ 00:13:49.358 { 00:13:49.358 "name": "spare", 00:13:49.358 "uuid": "c555704f-1667-5d79-b85a-ba935167119c", 00:13:49.358 "is_configured": true, 00:13:49.358 "data_offset": 2048, 00:13:49.358 "data_size": 63488 00:13:49.358 }, 00:13:49.358 { 00:13:49.358 "name": "BaseBdev2", 00:13:49.358 "uuid": "ac245b80-fa54-53f7-988c-f7f972822df3", 00:13:49.358 "is_configured": true, 00:13:49.358 "data_offset": 2048, 00:13:49.358 "data_size": 63488 00:13:49.358 }, 00:13:49.358 { 00:13:49.358 "name": "BaseBdev3", 00:13:49.358 "uuid": "417e20f4-f44f-5b7e-8040-e2b86bea31ee", 00:13:49.358 "is_configured": true, 00:13:49.358 "data_offset": 2048, 00:13:49.358 "data_size": 63488 00:13:49.358 }, 00:13:49.358 { 00:13:49.358 "name": "BaseBdev4", 00:13:49.358 "uuid": "ce2b40ba-1e74-55fa-b658-82ae2a4d1cd7", 00:13:49.358 "is_configured": true, 00:13:49.358 "data_offset": 2048, 00:13:49.358 "data_size": 63488 00:13:49.358 } 00:13:49.358 ] 00:13:49.358 }' 00:13:49.358 14:38:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.358 14:38:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:49.358 14:38:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.358 14:38:40 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:49.358 14:38:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:49.359 14:38:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:49.359 14:38:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.359 14:38:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:49.359 14:38:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:49.359 14:38:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.359 14:38:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.359 14:38:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.359 14:38:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.359 14:38:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.359 14:38:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.359 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.359 "name": "raid_bdev1", 00:13:49.359 "uuid": "912dcbd7-93f7-4e02-9037-1ee89e87a356", 00:13:49.359 "strip_size_kb": 64, 00:13:49.359 "state": "online", 00:13:49.359 "raid_level": "raid5f", 00:13:49.359 "superblock": true, 00:13:49.359 "num_base_bdevs": 4, 00:13:49.359 "num_base_bdevs_discovered": 4, 00:13:49.359 "num_base_bdevs_operational": 4, 00:13:49.359 "base_bdevs_list": [ 00:13:49.359 { 00:13:49.359 "name": "spare", 00:13:49.359 "uuid": "c555704f-1667-5d79-b85a-ba935167119c", 00:13:49.359 "is_configured": true, 00:13:49.359 "data_offset": 2048, 00:13:49.359 "data_size": 
63488 00:13:49.359 }, 00:13:49.359 { 00:13:49.359 "name": "BaseBdev2", 00:13:49.359 "uuid": "ac245b80-fa54-53f7-988c-f7f972822df3", 00:13:49.359 "is_configured": true, 00:13:49.359 "data_offset": 2048, 00:13:49.359 "data_size": 63488 00:13:49.359 }, 00:13:49.359 { 00:13:49.359 "name": "BaseBdev3", 00:13:49.359 "uuid": "417e20f4-f44f-5b7e-8040-e2b86bea31ee", 00:13:49.359 "is_configured": true, 00:13:49.359 "data_offset": 2048, 00:13:49.359 "data_size": 63488 00:13:49.359 }, 00:13:49.359 { 00:13:49.359 "name": "BaseBdev4", 00:13:49.359 "uuid": "ce2b40ba-1e74-55fa-b658-82ae2a4d1cd7", 00:13:49.359 "is_configured": true, 00:13:49.359 "data_offset": 2048, 00:13:49.359 "data_size": 63488 00:13:49.359 } 00:13:49.359 ] 00:13:49.359 }' 00:13:49.359 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.618 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:49.618 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.618 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:49.618 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:13:49.618 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:49.618 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:49.618 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:49.618 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:49.618 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:49.618 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:13:49.618 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.618 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.618 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.618 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.618 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.618 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.618 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.618 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.618 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.618 "name": "raid_bdev1", 00:13:49.618 "uuid": "912dcbd7-93f7-4e02-9037-1ee89e87a356", 00:13:49.618 "strip_size_kb": 64, 00:13:49.618 "state": "online", 00:13:49.618 "raid_level": "raid5f", 00:13:49.618 "superblock": true, 00:13:49.618 "num_base_bdevs": 4, 00:13:49.618 "num_base_bdevs_discovered": 4, 00:13:49.618 "num_base_bdevs_operational": 4, 00:13:49.618 "base_bdevs_list": [ 00:13:49.618 { 00:13:49.618 "name": "spare", 00:13:49.618 "uuid": "c555704f-1667-5d79-b85a-ba935167119c", 00:13:49.618 "is_configured": true, 00:13:49.618 "data_offset": 2048, 00:13:49.618 "data_size": 63488 00:13:49.618 }, 00:13:49.618 { 00:13:49.618 "name": "BaseBdev2", 00:13:49.618 "uuid": "ac245b80-fa54-53f7-988c-f7f972822df3", 00:13:49.618 "is_configured": true, 00:13:49.618 "data_offset": 2048, 00:13:49.618 "data_size": 63488 00:13:49.618 }, 00:13:49.618 { 00:13:49.618 "name": "BaseBdev3", 00:13:49.618 "uuid": "417e20f4-f44f-5b7e-8040-e2b86bea31ee", 00:13:49.618 "is_configured": true, 00:13:49.618 "data_offset": 
2048, 00:13:49.618 "data_size": 63488 00:13:49.618 }, 00:13:49.618 { 00:13:49.618 "name": "BaseBdev4", 00:13:49.618 "uuid": "ce2b40ba-1e74-55fa-b658-82ae2a4d1cd7", 00:13:49.618 "is_configured": true, 00:13:49.618 "data_offset": 2048, 00:13:49.618 "data_size": 63488 00:13:49.618 } 00:13:49.618 ] 00:13:49.618 }' 00:13:49.618 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.618 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.899 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:49.899 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.899 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.899 [2024-10-01 14:38:41.406597] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:49.899 [2024-10-01 14:38:41.406648] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:49.899 [2024-10-01 14:38:41.406738] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:49.899 [2024-10-01 14:38:41.406864] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:49.899 [2024-10-01 14:38:41.406874] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:49.899 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.899 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.899 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:49.899 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.899 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:49.899 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.899 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:49.899 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:49.899 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:49.899 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:49.899 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:49.899 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:49.899 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:49.899 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:49.899 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:49.899 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:49.899 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:49.899 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:49.899 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:50.158 /dev/nbd0 00:13:50.158 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:50.158 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:50.158 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 
00:13:50.158 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:50.158 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:50.158 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:50.158 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:50.158 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:50.158 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:50.158 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:50.158 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:50.158 1+0 records in 00:13:50.158 1+0 records out 00:13:50.158 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277754 s, 14.7 MB/s 00:13:50.158 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.158 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:50.158 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.158 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:50.158 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:50.158 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:50.158 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:50.158 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:50.416 /dev/nbd1 00:13:50.416 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:50.416 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:50.416 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:50.416 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:50.416 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:50.416 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:50.416 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:50.416 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:50.416 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:50.416 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:50.416 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:50.416 1+0 records in 00:13:50.416 1+0 records out 00:13:50.416 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287238 s, 14.3 MB/s 00:13:50.416 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.416 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:50.416 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.416 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- 
# '[' 4096 '!=' 0 ']' 00:13:50.416 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:50.416 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:50.416 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:50.416 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:50.416 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:50.416 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:50.416 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:50.416 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:50.416 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:50.416 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:50.416 14:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:50.673 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:50.673 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:50.673 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:50.673 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:50.673 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:50.673 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:50.673 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@41 -- # break 00:13:50.674 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:50.674 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:50.674 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:50.932 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:50.932 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:50.932 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:50.932 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:50.932 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:50.932 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:50.932 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:50.932 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:50.932 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:50.932 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:50.932 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.932 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.932 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.932 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:50.932 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:50.932 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.932 [2024-10-01 14:38:42.440205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:50.932 [2024-10-01 14:38:42.440364] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:50.932 [2024-10-01 14:38:42.440389] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:13:50.932 [2024-10-01 14:38:42.440398] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:50.932 [2024-10-01 14:38:42.442252] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:50.932 [2024-10-01 14:38:42.442283] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:50.932 [2024-10-01 14:38:42.442358] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:50.932 [2024-10-01 14:38:42.442399] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:50.932 [2024-10-01 14:38:42.442506] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:50.932 [2024-10-01 14:38:42.442583] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:50.932 [2024-10-01 14:38:42.442643] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:50.932 spare 00:13:50.932 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.932 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:50.932 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.932 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.932 [2024-10-01 14:38:42.542733] bdev_raid.c:1730:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000007b00 00:13:50.932 [2024-10-01 14:38:42.542927] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:50.932 [2024-10-01 14:38:42.543202] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:13:50.932 [2024-10-01 14:38:42.547054] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:50.932 [2024-10-01 14:38:42.547072] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:50.932 [2024-10-01 14:38:42.547236] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:50.932 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.932 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:13:50.932 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:50.932 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:50.932 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:50.932 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:50.932 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:50.932 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.932 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.932 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.932 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.932 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.932 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.932 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.932 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.932 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.932 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.932 "name": "raid_bdev1", 00:13:50.932 "uuid": "912dcbd7-93f7-4e02-9037-1ee89e87a356", 00:13:50.932 "strip_size_kb": 64, 00:13:50.932 "state": "online", 00:13:50.932 "raid_level": "raid5f", 00:13:50.932 "superblock": true, 00:13:50.932 "num_base_bdevs": 4, 00:13:50.932 "num_base_bdevs_discovered": 4, 00:13:50.932 "num_base_bdevs_operational": 4, 00:13:50.932 "base_bdevs_list": [ 00:13:50.932 { 00:13:50.932 "name": "spare", 00:13:50.932 "uuid": "c555704f-1667-5d79-b85a-ba935167119c", 00:13:50.932 "is_configured": true, 00:13:50.932 "data_offset": 2048, 00:13:50.932 "data_size": 63488 00:13:50.932 }, 00:13:50.932 { 00:13:50.932 "name": "BaseBdev2", 00:13:50.932 "uuid": "ac245b80-fa54-53f7-988c-f7f972822df3", 00:13:50.932 "is_configured": true, 00:13:50.932 "data_offset": 2048, 00:13:50.932 "data_size": 63488 00:13:50.932 }, 00:13:50.932 { 00:13:50.932 "name": "BaseBdev3", 00:13:50.932 "uuid": "417e20f4-f44f-5b7e-8040-e2b86bea31ee", 00:13:50.932 "is_configured": true, 00:13:50.932 "data_offset": 2048, 00:13:50.932 "data_size": 63488 00:13:50.932 }, 00:13:50.932 { 00:13:50.932 "name": "BaseBdev4", 00:13:50.932 "uuid": "ce2b40ba-1e74-55fa-b658-82ae2a4d1cd7", 00:13:50.932 "is_configured": true, 00:13:50.932 "data_offset": 2048, 00:13:50.932 "data_size": 63488 00:13:50.932 } 00:13:50.932 ] 00:13:50.932 }' 00:13:50.932 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.932 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.498 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:51.498 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:51.498 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:51.498 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:51.498 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:51.498 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.498 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.498 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.498 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.498 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.498 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:51.498 "name": "raid_bdev1", 00:13:51.498 "uuid": "912dcbd7-93f7-4e02-9037-1ee89e87a356", 00:13:51.498 "strip_size_kb": 64, 00:13:51.498 "state": "online", 00:13:51.498 "raid_level": "raid5f", 00:13:51.498 "superblock": true, 00:13:51.498 "num_base_bdevs": 4, 00:13:51.498 "num_base_bdevs_discovered": 4, 00:13:51.498 "num_base_bdevs_operational": 4, 00:13:51.498 "base_bdevs_list": [ 00:13:51.498 { 00:13:51.498 "name": "spare", 00:13:51.498 "uuid": "c555704f-1667-5d79-b85a-ba935167119c", 00:13:51.498 "is_configured": true, 00:13:51.498 "data_offset": 2048, 00:13:51.498 "data_size": 63488 00:13:51.498 }, 00:13:51.498 { 
00:13:51.498 "name": "BaseBdev2", 00:13:51.498 "uuid": "ac245b80-fa54-53f7-988c-f7f972822df3", 00:13:51.498 "is_configured": true, 00:13:51.498 "data_offset": 2048, 00:13:51.498 "data_size": 63488 00:13:51.498 }, 00:13:51.498 { 00:13:51.498 "name": "BaseBdev3", 00:13:51.498 "uuid": "417e20f4-f44f-5b7e-8040-e2b86bea31ee", 00:13:51.498 "is_configured": true, 00:13:51.498 "data_offset": 2048, 00:13:51.498 "data_size": 63488 00:13:51.498 }, 00:13:51.498 { 00:13:51.498 "name": "BaseBdev4", 00:13:51.498 "uuid": "ce2b40ba-1e74-55fa-b658-82ae2a4d1cd7", 00:13:51.498 "is_configured": true, 00:13:51.498 "data_offset": 2048, 00:13:51.498 "data_size": 63488 00:13:51.498 } 00:13:51.498 ] 00:13:51.498 }' 00:13:51.498 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:51.498 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:51.498 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:51.498 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:51.498 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:51.498 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.498 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.498 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.498 14:38:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.498 14:38:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:51.498 14:38:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:51.498 14:38:43 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.498 14:38:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.498 [2024-10-01 14:38:43.019744] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:51.498 14:38:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.498 14:38:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:51.498 14:38:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:51.498 14:38:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:51.498 14:38:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:51.498 14:38:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:51.498 14:38:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:51.498 14:38:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.498 14:38:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.498 14:38:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.498 14:38:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.498 14:38:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.498 14:38:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.498 14:38:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.498 14:38:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.498 14:38:43 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.498 14:38:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.498 "name": "raid_bdev1", 00:13:51.498 "uuid": "912dcbd7-93f7-4e02-9037-1ee89e87a356", 00:13:51.498 "strip_size_kb": 64, 00:13:51.498 "state": "online", 00:13:51.498 "raid_level": "raid5f", 00:13:51.498 "superblock": true, 00:13:51.498 "num_base_bdevs": 4, 00:13:51.498 "num_base_bdevs_discovered": 3, 00:13:51.498 "num_base_bdevs_operational": 3, 00:13:51.498 "base_bdevs_list": [ 00:13:51.498 { 00:13:51.498 "name": null, 00:13:51.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.498 "is_configured": false, 00:13:51.498 "data_offset": 0, 00:13:51.498 "data_size": 63488 00:13:51.498 }, 00:13:51.498 { 00:13:51.498 "name": "BaseBdev2", 00:13:51.498 "uuid": "ac245b80-fa54-53f7-988c-f7f972822df3", 00:13:51.498 "is_configured": true, 00:13:51.498 "data_offset": 2048, 00:13:51.498 "data_size": 63488 00:13:51.498 }, 00:13:51.498 { 00:13:51.498 "name": "BaseBdev3", 00:13:51.498 "uuid": "417e20f4-f44f-5b7e-8040-e2b86bea31ee", 00:13:51.498 "is_configured": true, 00:13:51.498 "data_offset": 2048, 00:13:51.498 "data_size": 63488 00:13:51.498 }, 00:13:51.498 { 00:13:51.498 "name": "BaseBdev4", 00:13:51.498 "uuid": "ce2b40ba-1e74-55fa-b658-82ae2a4d1cd7", 00:13:51.498 "is_configured": true, 00:13:51.498 "data_offset": 2048, 00:13:51.498 "data_size": 63488 00:13:51.498 } 00:13:51.498 ] 00:13:51.498 }' 00:13:51.498 14:38:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.498 14:38:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.756 14:38:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:51.756 14:38:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.756 14:38:43 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:51.756 [2024-10-01 14:38:43.343824] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:51.756 [2024-10-01 14:38:43.343980] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:51.756 [2024-10-01 14:38:43.343994] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:51.756 [2024-10-01 14:38:43.344027] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:51.756 [2024-10-01 14:38:43.351320] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:13:51.756 14:38:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.756 14:38:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:51.756 [2024-10-01 14:38:43.356539] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:52.690 14:38:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:52.690 14:38:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.690 14:38:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:52.690 14:38:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:52.690 14:38:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.690 14:38:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.690 14:38:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.690 14:38:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:52.690 14:38:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.948 14:38:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.948 14:38:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.948 "name": "raid_bdev1", 00:13:52.948 "uuid": "912dcbd7-93f7-4e02-9037-1ee89e87a356", 00:13:52.948 "strip_size_kb": 64, 00:13:52.948 "state": "online", 00:13:52.948 "raid_level": "raid5f", 00:13:52.948 "superblock": true, 00:13:52.948 "num_base_bdevs": 4, 00:13:52.948 "num_base_bdevs_discovered": 4, 00:13:52.948 "num_base_bdevs_operational": 4, 00:13:52.948 "process": { 00:13:52.948 "type": "rebuild", 00:13:52.948 "target": "spare", 00:13:52.948 "progress": { 00:13:52.948 "blocks": 19200, 00:13:52.948 "percent": 10 00:13:52.948 } 00:13:52.948 }, 00:13:52.948 "base_bdevs_list": [ 00:13:52.948 { 00:13:52.948 "name": "spare", 00:13:52.948 "uuid": "c555704f-1667-5d79-b85a-ba935167119c", 00:13:52.948 "is_configured": true, 00:13:52.948 "data_offset": 2048, 00:13:52.948 "data_size": 63488 00:13:52.948 }, 00:13:52.948 { 00:13:52.948 "name": "BaseBdev2", 00:13:52.948 "uuid": "ac245b80-fa54-53f7-988c-f7f972822df3", 00:13:52.948 "is_configured": true, 00:13:52.948 "data_offset": 2048, 00:13:52.948 "data_size": 63488 00:13:52.948 }, 00:13:52.948 { 00:13:52.948 "name": "BaseBdev3", 00:13:52.948 "uuid": "417e20f4-f44f-5b7e-8040-e2b86bea31ee", 00:13:52.948 "is_configured": true, 00:13:52.948 "data_offset": 2048, 00:13:52.948 "data_size": 63488 00:13:52.948 }, 00:13:52.948 { 00:13:52.948 "name": "BaseBdev4", 00:13:52.948 "uuid": "ce2b40ba-1e74-55fa-b658-82ae2a4d1cd7", 00:13:52.948 "is_configured": true, 00:13:52.948 "data_offset": 2048, 00:13:52.948 "data_size": 63488 00:13:52.948 } 00:13:52.948 ] 00:13:52.948 }' 00:13:52.948 14:38:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.948 14:38:44 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:52.948 14:38:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:52.948 14:38:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:52.948 14:38:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:52.948 14:38:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.948 14:38:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.948 [2024-10-01 14:38:44.461389] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:52.948 [2024-10-01 14:38:44.463991] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:52.948 [2024-10-01 14:38:44.464128] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:52.948 [2024-10-01 14:38:44.464222] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:52.948 [2024-10-01 14:38:44.464246] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:52.948 14:38:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.948 14:38:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:52.948 14:38:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:52.948 14:38:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.948 14:38:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:52.948 14:38:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:52.949 14:38:44 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:52.949 14:38:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.949 14:38:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.949 14:38:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.949 14:38:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.949 14:38:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.949 14:38:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.949 14:38:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.949 14:38:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.949 14:38:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.949 14:38:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.949 "name": "raid_bdev1", 00:13:52.949 "uuid": "912dcbd7-93f7-4e02-9037-1ee89e87a356", 00:13:52.949 "strip_size_kb": 64, 00:13:52.949 "state": "online", 00:13:52.949 "raid_level": "raid5f", 00:13:52.949 "superblock": true, 00:13:52.949 "num_base_bdevs": 4, 00:13:52.949 "num_base_bdevs_discovered": 3, 00:13:52.949 "num_base_bdevs_operational": 3, 00:13:52.949 "base_bdevs_list": [ 00:13:52.949 { 00:13:52.949 "name": null, 00:13:52.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.949 "is_configured": false, 00:13:52.949 "data_offset": 0, 00:13:52.949 "data_size": 63488 00:13:52.949 }, 00:13:52.949 { 00:13:52.949 "name": "BaseBdev2", 00:13:52.949 "uuid": "ac245b80-fa54-53f7-988c-f7f972822df3", 00:13:52.949 "is_configured": true, 00:13:52.949 "data_offset": 2048, 00:13:52.949 "data_size": 
63488 00:13:52.949 }, 00:13:52.949 { 00:13:52.949 "name": "BaseBdev3", 00:13:52.949 "uuid": "417e20f4-f44f-5b7e-8040-e2b86bea31ee", 00:13:52.949 "is_configured": true, 00:13:52.949 "data_offset": 2048, 00:13:52.949 "data_size": 63488 00:13:52.949 }, 00:13:52.949 { 00:13:52.949 "name": "BaseBdev4", 00:13:52.949 "uuid": "ce2b40ba-1e74-55fa-b658-82ae2a4d1cd7", 00:13:52.949 "is_configured": true, 00:13:52.949 "data_offset": 2048, 00:13:52.949 "data_size": 63488 00:13:52.949 } 00:13:52.949 ] 00:13:52.949 }' 00:13:52.949 14:38:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.949 14:38:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.206 14:38:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:53.207 14:38:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.207 14:38:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.207 [2024-10-01 14:38:44.792836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:53.207 [2024-10-01 14:38:44.792906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.207 [2024-10-01 14:38:44.792942] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:13:53.207 [2024-10-01 14:38:44.792957] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.207 [2024-10-01 14:38:44.793504] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.207 [2024-10-01 14:38:44.793543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:53.207 [2024-10-01 14:38:44.793648] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:53.207 [2024-10-01 14:38:44.793672] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: 
raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:53.207 [2024-10-01 14:38:44.793685] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:53.207 [2024-10-01 14:38:44.793736] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:53.207 [2024-10-01 14:38:44.802437] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:13:53.207 spare 00:13:53.207 14:38:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.207 14:38:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:53.207 [2024-10-01 14:38:44.807766] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:54.139 14:38:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:54.139 14:38:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:54.139 14:38:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:54.139 14:38:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:54.139 14:38:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:54.139 14:38:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.139 14:38:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.139 14:38:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.139 14:38:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.139 14:38:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.397 14:38:45 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:54.397 "name": "raid_bdev1", 00:13:54.397 "uuid": "912dcbd7-93f7-4e02-9037-1ee89e87a356", 00:13:54.397 "strip_size_kb": 64, 00:13:54.397 "state": "online", 00:13:54.397 "raid_level": "raid5f", 00:13:54.397 "superblock": true, 00:13:54.397 "num_base_bdevs": 4, 00:13:54.397 "num_base_bdevs_discovered": 4, 00:13:54.397 "num_base_bdevs_operational": 4, 00:13:54.397 "process": { 00:13:54.397 "type": "rebuild", 00:13:54.397 "target": "spare", 00:13:54.397 "progress": { 00:13:54.397 "blocks": 17280, 00:13:54.398 "percent": 9 00:13:54.398 } 00:13:54.398 }, 00:13:54.398 "base_bdevs_list": [ 00:13:54.398 { 00:13:54.398 "name": "spare", 00:13:54.398 "uuid": "c555704f-1667-5d79-b85a-ba935167119c", 00:13:54.398 "is_configured": true, 00:13:54.398 "data_offset": 2048, 00:13:54.398 "data_size": 63488 00:13:54.398 }, 00:13:54.398 { 00:13:54.398 "name": "BaseBdev2", 00:13:54.398 "uuid": "ac245b80-fa54-53f7-988c-f7f972822df3", 00:13:54.398 "is_configured": true, 00:13:54.398 "data_offset": 2048, 00:13:54.398 "data_size": 63488 00:13:54.398 }, 00:13:54.398 { 00:13:54.398 "name": "BaseBdev3", 00:13:54.398 "uuid": "417e20f4-f44f-5b7e-8040-e2b86bea31ee", 00:13:54.398 "is_configured": true, 00:13:54.398 "data_offset": 2048, 00:13:54.398 "data_size": 63488 00:13:54.398 }, 00:13:54.398 { 00:13:54.398 "name": "BaseBdev4", 00:13:54.398 "uuid": "ce2b40ba-1e74-55fa-b658-82ae2a4d1cd7", 00:13:54.398 "is_configured": true, 00:13:54.398 "data_offset": 2048, 00:13:54.398 "data_size": 63488 00:13:54.398 } 00:13:54.398 ] 00:13:54.398 }' 00:13:54.398 14:38:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:54.398 14:38:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:54.398 14:38:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:54.398 14:38:45 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:54.398 14:38:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:54.398 14:38:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.398 14:38:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.398 [2024-10-01 14:38:45.900816] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:54.398 [2024-10-01 14:38:45.915337] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:54.398 [2024-10-01 14:38:45.915383] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:54.398 [2024-10-01 14:38:45.915400] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:54.398 [2024-10-01 14:38:45.915406] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:54.398 14:38:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.398 14:38:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:54.398 14:38:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:54.398 14:38:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:54.398 14:38:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:54.398 14:38:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:54.398 14:38:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:54.398 14:38:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.398 14:38:45 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.398 14:38:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.398 14:38:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.398 14:38:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.398 14:38:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.398 14:38:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.398 14:38:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.398 14:38:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.398 14:38:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.398 "name": "raid_bdev1", 00:13:54.398 "uuid": "912dcbd7-93f7-4e02-9037-1ee89e87a356", 00:13:54.398 "strip_size_kb": 64, 00:13:54.398 "state": "online", 00:13:54.398 "raid_level": "raid5f", 00:13:54.398 "superblock": true, 00:13:54.398 "num_base_bdevs": 4, 00:13:54.398 "num_base_bdevs_discovered": 3, 00:13:54.398 "num_base_bdevs_operational": 3, 00:13:54.398 "base_bdevs_list": [ 00:13:54.398 { 00:13:54.398 "name": null, 00:13:54.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.398 "is_configured": false, 00:13:54.398 "data_offset": 0, 00:13:54.398 "data_size": 63488 00:13:54.398 }, 00:13:54.398 { 00:13:54.398 "name": "BaseBdev2", 00:13:54.398 "uuid": "ac245b80-fa54-53f7-988c-f7f972822df3", 00:13:54.398 "is_configured": true, 00:13:54.398 "data_offset": 2048, 00:13:54.398 "data_size": 63488 00:13:54.398 }, 00:13:54.398 { 00:13:54.398 "name": "BaseBdev3", 00:13:54.398 "uuid": "417e20f4-f44f-5b7e-8040-e2b86bea31ee", 00:13:54.398 "is_configured": true, 00:13:54.398 "data_offset": 2048, 00:13:54.398 
"data_size": 63488 00:13:54.398 }, 00:13:54.398 { 00:13:54.398 "name": "BaseBdev4", 00:13:54.398 "uuid": "ce2b40ba-1e74-55fa-b658-82ae2a4d1cd7", 00:13:54.398 "is_configured": true, 00:13:54.398 "data_offset": 2048, 00:13:54.398 "data_size": 63488 00:13:54.398 } 00:13:54.398 ] 00:13:54.398 }' 00:13:54.398 14:38:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.398 14:38:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.657 14:38:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:54.657 14:38:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:54.657 14:38:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:54.657 14:38:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:54.657 14:38:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:54.657 14:38:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.657 14:38:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.657 14:38:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.657 14:38:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.657 14:38:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.657 14:38:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:54.657 "name": "raid_bdev1", 00:13:54.657 "uuid": "912dcbd7-93f7-4e02-9037-1ee89e87a356", 00:13:54.657 "strip_size_kb": 64, 00:13:54.657 "state": "online", 00:13:54.657 "raid_level": "raid5f", 00:13:54.657 "superblock": true, 00:13:54.657 "num_base_bdevs": 4, 00:13:54.657 
"num_base_bdevs_discovered": 3, 00:13:54.657 "num_base_bdevs_operational": 3, 00:13:54.657 "base_bdevs_list": [ 00:13:54.657 { 00:13:54.657 "name": null, 00:13:54.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.657 "is_configured": false, 00:13:54.657 "data_offset": 0, 00:13:54.657 "data_size": 63488 00:13:54.657 }, 00:13:54.657 { 00:13:54.657 "name": "BaseBdev2", 00:13:54.657 "uuid": "ac245b80-fa54-53f7-988c-f7f972822df3", 00:13:54.657 "is_configured": true, 00:13:54.657 "data_offset": 2048, 00:13:54.657 "data_size": 63488 00:13:54.657 }, 00:13:54.657 { 00:13:54.657 "name": "BaseBdev3", 00:13:54.657 "uuid": "417e20f4-f44f-5b7e-8040-e2b86bea31ee", 00:13:54.657 "is_configured": true, 00:13:54.657 "data_offset": 2048, 00:13:54.657 "data_size": 63488 00:13:54.657 }, 00:13:54.657 { 00:13:54.657 "name": "BaseBdev4", 00:13:54.657 "uuid": "ce2b40ba-1e74-55fa-b658-82ae2a4d1cd7", 00:13:54.657 "is_configured": true, 00:13:54.657 "data_offset": 2048, 00:13:54.657 "data_size": 63488 00:13:54.657 } 00:13:54.657 ] 00:13:54.657 }' 00:13:54.657 14:38:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:54.657 14:38:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:54.657 14:38:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:54.657 14:38:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:54.657 14:38:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:54.657 14:38:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.657 14:38:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.915 14:38:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.916 14:38:46 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:54.916 14:38:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.916 14:38:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.916 [2024-10-01 14:38:46.351662] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:54.916 [2024-10-01 14:38:46.351820] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.916 [2024-10-01 14:38:46.351844] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:13:54.916 [2024-10-01 14:38:46.351853] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.916 [2024-10-01 14:38:46.352219] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.916 [2024-10-01 14:38:46.352232] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:54.916 [2024-10-01 14:38:46.352295] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:54.916 [2024-10-01 14:38:46.352307] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:54.916 [2024-10-01 14:38:46.352315] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:54.916 [2024-10-01 14:38:46.352323] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:54.916 BaseBdev1 00:13:54.916 14:38:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.916 14:38:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:55.846 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 
00:13:55.846 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:55.846 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:55.846 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:55.846 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.846 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:55.846 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.846 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.846 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.846 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.846 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.846 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.846 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.846 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.846 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.846 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.846 "name": "raid_bdev1", 00:13:55.846 "uuid": "912dcbd7-93f7-4e02-9037-1ee89e87a356", 00:13:55.846 "strip_size_kb": 64, 00:13:55.846 "state": "online", 00:13:55.846 "raid_level": "raid5f", 00:13:55.846 "superblock": true, 00:13:55.846 "num_base_bdevs": 4, 00:13:55.846 "num_base_bdevs_discovered": 3, 00:13:55.846 
"num_base_bdevs_operational": 3, 00:13:55.846 "base_bdevs_list": [ 00:13:55.846 { 00:13:55.846 "name": null, 00:13:55.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.846 "is_configured": false, 00:13:55.846 "data_offset": 0, 00:13:55.846 "data_size": 63488 00:13:55.846 }, 00:13:55.846 { 00:13:55.846 "name": "BaseBdev2", 00:13:55.846 "uuid": "ac245b80-fa54-53f7-988c-f7f972822df3", 00:13:55.846 "is_configured": true, 00:13:55.846 "data_offset": 2048, 00:13:55.846 "data_size": 63488 00:13:55.846 }, 00:13:55.846 { 00:13:55.846 "name": "BaseBdev3", 00:13:55.846 "uuid": "417e20f4-f44f-5b7e-8040-e2b86bea31ee", 00:13:55.846 "is_configured": true, 00:13:55.846 "data_offset": 2048, 00:13:55.846 "data_size": 63488 00:13:55.846 }, 00:13:55.846 { 00:13:55.846 "name": "BaseBdev4", 00:13:55.846 "uuid": "ce2b40ba-1e74-55fa-b658-82ae2a4d1cd7", 00:13:55.846 "is_configured": true, 00:13:55.846 "data_offset": 2048, 00:13:55.846 "data_size": 63488 00:13:55.846 } 00:13:55.846 ] 00:13:55.846 }' 00:13:55.846 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.846 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.104 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:56.104 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:56.104 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:56.104 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:56.104 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:56.104 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.104 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:56.104 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.104 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.104 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.104 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:56.104 "name": "raid_bdev1", 00:13:56.104 "uuid": "912dcbd7-93f7-4e02-9037-1ee89e87a356", 00:13:56.104 "strip_size_kb": 64, 00:13:56.104 "state": "online", 00:13:56.104 "raid_level": "raid5f", 00:13:56.104 "superblock": true, 00:13:56.104 "num_base_bdevs": 4, 00:13:56.104 "num_base_bdevs_discovered": 3, 00:13:56.104 "num_base_bdevs_operational": 3, 00:13:56.104 "base_bdevs_list": [ 00:13:56.104 { 00:13:56.104 "name": null, 00:13:56.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.104 "is_configured": false, 00:13:56.104 "data_offset": 0, 00:13:56.104 "data_size": 63488 00:13:56.104 }, 00:13:56.104 { 00:13:56.104 "name": "BaseBdev2", 00:13:56.104 "uuid": "ac245b80-fa54-53f7-988c-f7f972822df3", 00:13:56.104 "is_configured": true, 00:13:56.104 "data_offset": 2048, 00:13:56.104 "data_size": 63488 00:13:56.104 }, 00:13:56.104 { 00:13:56.104 "name": "BaseBdev3", 00:13:56.104 "uuid": "417e20f4-f44f-5b7e-8040-e2b86bea31ee", 00:13:56.104 "is_configured": true, 00:13:56.104 "data_offset": 2048, 00:13:56.104 "data_size": 63488 00:13:56.104 }, 00:13:56.104 { 00:13:56.104 "name": "BaseBdev4", 00:13:56.104 "uuid": "ce2b40ba-1e74-55fa-b658-82ae2a4d1cd7", 00:13:56.104 "is_configured": true, 00:13:56.104 "data_offset": 2048, 00:13:56.104 "data_size": 63488 00:13:56.104 } 00:13:56.104 ] 00:13:56.104 }' 00:13:56.104 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:56.104 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:56.104 
14:38:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:56.104 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:56.104 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:56.104 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:13:56.104 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:56.104 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:56.104 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:56.104 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:56.104 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:56.104 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:56.104 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.104 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.104 [2024-10-01 14:38:47.775998] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:56.104 [2024-10-01 14:38:47.776125] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:56.104 [2024-10-01 14:38:47.776141] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:56.104 request: 00:13:56.104 { 00:13:56.104 "base_bdev": "BaseBdev1", 00:13:56.104 "raid_bdev": 
"raid_bdev1", 00:13:56.104 "method": "bdev_raid_add_base_bdev", 00:13:56.104 "req_id": 1 00:13:56.104 } 00:13:56.104 Got JSON-RPC error response 00:13:56.104 response: 00:13:56.104 { 00:13:56.104 "code": -22, 00:13:56.104 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:56.104 } 00:13:56.104 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:56.104 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:13:56.104 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:56.104 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:56.104 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:56.104 14:38:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:57.477 14:38:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:57.477 14:38:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:57.477 14:38:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:57.477 14:38:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:57.477 14:38:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.477 14:38:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:57.477 14:38:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.477 14:38:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.477 14:38:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.477 14:38:48 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.477 14:38:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.477 14:38:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.477 14:38:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.477 14:38:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.477 14:38:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.477 14:38:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.477 "name": "raid_bdev1", 00:13:57.477 "uuid": "912dcbd7-93f7-4e02-9037-1ee89e87a356", 00:13:57.477 "strip_size_kb": 64, 00:13:57.477 "state": "online", 00:13:57.477 "raid_level": "raid5f", 00:13:57.477 "superblock": true, 00:13:57.477 "num_base_bdevs": 4, 00:13:57.477 "num_base_bdevs_discovered": 3, 00:13:57.477 "num_base_bdevs_operational": 3, 00:13:57.477 "base_bdevs_list": [ 00:13:57.477 { 00:13:57.477 "name": null, 00:13:57.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.477 "is_configured": false, 00:13:57.477 "data_offset": 0, 00:13:57.477 "data_size": 63488 00:13:57.477 }, 00:13:57.477 { 00:13:57.477 "name": "BaseBdev2", 00:13:57.477 "uuid": "ac245b80-fa54-53f7-988c-f7f972822df3", 00:13:57.477 "is_configured": true, 00:13:57.477 "data_offset": 2048, 00:13:57.477 "data_size": 63488 00:13:57.477 }, 00:13:57.477 { 00:13:57.477 "name": "BaseBdev3", 00:13:57.477 "uuid": "417e20f4-f44f-5b7e-8040-e2b86bea31ee", 00:13:57.477 "is_configured": true, 00:13:57.477 "data_offset": 2048, 00:13:57.477 "data_size": 63488 00:13:57.477 }, 00:13:57.477 { 00:13:57.477 "name": "BaseBdev4", 00:13:57.477 "uuid": "ce2b40ba-1e74-55fa-b658-82ae2a4d1cd7", 00:13:57.477 "is_configured": true, 00:13:57.477 "data_offset": 2048, 00:13:57.477 
"data_size": 63488 00:13:57.477 } 00:13:57.477 ] 00:13:57.477 }' 00:13:57.477 14:38:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.477 14:38:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.477 14:38:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:57.477 14:38:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:57.477 14:38:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:57.477 14:38:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:57.477 14:38:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:57.477 14:38:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.477 14:38:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.477 14:38:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.477 14:38:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.477 14:38:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.477 14:38:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:57.477 "name": "raid_bdev1", 00:13:57.477 "uuid": "912dcbd7-93f7-4e02-9037-1ee89e87a356", 00:13:57.477 "strip_size_kb": 64, 00:13:57.477 "state": "online", 00:13:57.477 "raid_level": "raid5f", 00:13:57.477 "superblock": true, 00:13:57.477 "num_base_bdevs": 4, 00:13:57.477 "num_base_bdevs_discovered": 3, 00:13:57.477 "num_base_bdevs_operational": 3, 00:13:57.477 "base_bdevs_list": [ 00:13:57.477 { 00:13:57.477 "name": null, 00:13:57.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.478 
"is_configured": false, 00:13:57.478 "data_offset": 0, 00:13:57.478 "data_size": 63488 00:13:57.478 }, 00:13:57.478 { 00:13:57.478 "name": "BaseBdev2", 00:13:57.478 "uuid": "ac245b80-fa54-53f7-988c-f7f972822df3", 00:13:57.478 "is_configured": true, 00:13:57.478 "data_offset": 2048, 00:13:57.478 "data_size": 63488 00:13:57.478 }, 00:13:57.478 { 00:13:57.478 "name": "BaseBdev3", 00:13:57.478 "uuid": "417e20f4-f44f-5b7e-8040-e2b86bea31ee", 00:13:57.478 "is_configured": true, 00:13:57.478 "data_offset": 2048, 00:13:57.478 "data_size": 63488 00:13:57.478 }, 00:13:57.478 { 00:13:57.478 "name": "BaseBdev4", 00:13:57.478 "uuid": "ce2b40ba-1e74-55fa-b658-82ae2a4d1cd7", 00:13:57.478 "is_configured": true, 00:13:57.478 "data_offset": 2048, 00:13:57.478 "data_size": 63488 00:13:57.478 } 00:13:57.478 ] 00:13:57.478 }' 00:13:57.478 14:38:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:57.735 14:38:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:57.735 14:38:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:57.735 14:38:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:57.735 14:38:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82960 00:13:57.735 14:38:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 82960 ']' 00:13:57.735 14:38:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 82960 00:13:57.735 14:38:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:13:57.735 14:38:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:57.735 14:38:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82960 00:13:57.735 killing process with pid 82960 00:13:57.735 Received 
shutdown signal, test time was about 60.000000 seconds 00:13:57.735 00:13:57.735 Latency(us) 00:13:57.735 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.735 =================================================================================================================== 00:13:57.735 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:57.735 14:38:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:57.735 14:38:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:57.735 14:38:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82960' 00:13:57.735 14:38:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 82960 00:13:57.735 [2024-10-01 14:38:49.224786] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:57.735 14:38:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 82960 00:13:57.735 [2024-10-01 14:38:49.224884] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:57.735 [2024-10-01 14:38:49.224948] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:57.735 [2024-10-01 14:38:49.224959] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:57.993 [2024-10-01 14:38:49.472260] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:58.560 14:38:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:58.560 00:13:58.560 real 0m24.573s 00:13:58.560 user 0m29.723s 00:13:58.560 sys 0m2.304s 00:13:58.560 14:38:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:58.560 ************************************ 00:13:58.560 14:38:50 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:58.560 END TEST raid5f_rebuild_test_sb 00:13:58.560 ************************************ 00:13:58.560 14:38:50 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:13:58.560 14:38:50 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:13:58.560 14:38:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:58.560 14:38:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:58.560 14:38:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:58.560 ************************************ 00:13:58.560 START TEST raid_state_function_test_sb_4k 00:13:58.560 ************************************ 00:13:58.560 14:38:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:13:58.560 14:38:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:58.560 14:38:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:13:58.560 14:38:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:58.560 14:38:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:58.560 14:38:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:58.560 14:38:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:58.560 14:38:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:58.560 14:38:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:58.560 14:38:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:58.560 14:38:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 
-- # echo BaseBdev2 00:13:58.560 14:38:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:58.560 14:38:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:58.560 14:38:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:58.560 14:38:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:58.560 14:38:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:58.560 14:38:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:58.560 Process raid pid: 83755 00:13:58.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.560 14:38:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:58.560 14:38:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:58.560 14:38:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:58.560 14:38:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:13:58.560 14:38:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:58.560 14:38:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:58.560 14:38:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=83755 00:13:58.560 14:38:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83755' 00:13:58.560 14:38:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 83755 00:13:58.560 14:38:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # 
'[' -z 83755 ']' 00:13:58.560 14:38:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:58.560 14:38:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.560 14:38:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:58.560 14:38:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.560 14:38:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:58.560 14:38:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:58.560 [2024-10-01 14:38:50.233421] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:13:58.560 [2024-10-01 14:38:50.233754] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:58.819 [2024-10-01 14:38:50.374248] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.077 [2024-10-01 14:38:50.531924] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.077 [2024-10-01 14:38:50.645829] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:59.077 [2024-10-01 14:38:50.645870] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:59.643 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:59.643 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:13:59.643 14:38:51 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:59.643 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.643 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:59.643 [2024-10-01 14:38:51.084500] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:59.643 [2024-10-01 14:38:51.084541] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:59.643 [2024-10-01 14:38:51.084550] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:59.643 [2024-10-01 14:38:51.084559] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:59.643 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.643 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:59.643 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:59.643 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:59.643 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:59.643 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:59.643 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:59.643 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.643 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:13:59.643 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.643 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.643 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.643 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.643 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.643 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:59.643 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.643 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.643 "name": "Existed_Raid", 00:13:59.643 "uuid": "546da673-d343-4004-ab51-c0bea1bb5825", 00:13:59.643 "strip_size_kb": 0, 00:13:59.643 "state": "configuring", 00:13:59.643 "raid_level": "raid1", 00:13:59.643 "superblock": true, 00:13:59.643 "num_base_bdevs": 2, 00:13:59.643 "num_base_bdevs_discovered": 0, 00:13:59.643 "num_base_bdevs_operational": 2, 00:13:59.643 "base_bdevs_list": [ 00:13:59.643 { 00:13:59.643 "name": "BaseBdev1", 00:13:59.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.643 "is_configured": false, 00:13:59.643 "data_offset": 0, 00:13:59.643 "data_size": 0 00:13:59.643 }, 00:13:59.643 { 00:13:59.643 "name": "BaseBdev2", 00:13:59.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.643 "is_configured": false, 00:13:59.643 "data_offset": 0, 00:13:59.643 "data_size": 0 00:13:59.643 } 00:13:59.643 ] 00:13:59.643 }' 00:13:59.643 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.643 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:13:59.901 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:59.901 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.901 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:59.901 [2024-10-01 14:38:51.376499] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:59.901 [2024-10-01 14:38:51.376530] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:59.901 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.901 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:59.901 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.901 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:59.901 [2024-10-01 14:38:51.384509] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:59.901 [2024-10-01 14:38:51.384544] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:59.901 [2024-10-01 14:38:51.384551] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:59.901 [2024-10-01 14:38:51.384560] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:59.901 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.901 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:13:59.901 14:38:51 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.901 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:59.901 [2024-10-01 14:38:51.425821] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:59.901 BaseBdev1 00:13:59.901 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.901 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:59.901 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:59.901 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:59.901 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:13:59.901 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:59.901 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:59.901 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:59.901 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.901 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:59.901 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.901 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:59.901 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.901 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set 
+x 00:13:59.901 [ 00:13:59.901 { 00:13:59.901 "name": "BaseBdev1", 00:13:59.901 "aliases": [ 00:13:59.901 "bbbc149e-1176-48ac-829f-e3ae40a76c23" 00:13:59.901 ], 00:13:59.901 "product_name": "Malloc disk", 00:13:59.901 "block_size": 4096, 00:13:59.901 "num_blocks": 8192, 00:13:59.901 "uuid": "bbbc149e-1176-48ac-829f-e3ae40a76c23", 00:13:59.901 "assigned_rate_limits": { 00:13:59.901 "rw_ios_per_sec": 0, 00:13:59.901 "rw_mbytes_per_sec": 0, 00:13:59.901 "r_mbytes_per_sec": 0, 00:13:59.901 "w_mbytes_per_sec": 0 00:13:59.901 }, 00:13:59.901 "claimed": true, 00:13:59.901 "claim_type": "exclusive_write", 00:13:59.901 "zoned": false, 00:13:59.901 "supported_io_types": { 00:13:59.901 "read": true, 00:13:59.901 "write": true, 00:13:59.901 "unmap": true, 00:13:59.901 "flush": true, 00:13:59.901 "reset": true, 00:13:59.901 "nvme_admin": false, 00:13:59.901 "nvme_io": false, 00:13:59.901 "nvme_io_md": false, 00:13:59.901 "write_zeroes": true, 00:13:59.901 "zcopy": true, 00:13:59.901 "get_zone_info": false, 00:13:59.901 "zone_management": false, 00:13:59.901 "zone_append": false, 00:13:59.901 "compare": false, 00:13:59.901 "compare_and_write": false, 00:13:59.901 "abort": true, 00:13:59.901 "seek_hole": false, 00:13:59.901 "seek_data": false, 00:13:59.901 "copy": true, 00:13:59.901 "nvme_iov_md": false 00:13:59.901 }, 00:13:59.901 "memory_domains": [ 00:13:59.901 { 00:13:59.901 "dma_device_id": "system", 00:13:59.901 "dma_device_type": 1 00:13:59.901 }, 00:13:59.901 { 00:13:59.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.901 "dma_device_type": 2 00:13:59.901 } 00:13:59.901 ], 00:13:59.901 "driver_specific": {} 00:13:59.901 } 00:13:59.901 ] 00:13:59.901 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.901 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:13:59.901 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:59.902 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:59.902 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:59.902 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:59.902 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:59.902 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:59.902 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.902 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.902 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.902 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.902 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.902 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.902 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.902 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:13:59.902 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.902 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.902 "name": "Existed_Raid", 00:13:59.902 "uuid": "0a1eb97c-1772-43be-9652-ee82b4d9ccce", 00:13:59.902 "strip_size_kb": 0, 00:13:59.902 "state": 
"configuring", 00:13:59.902 "raid_level": "raid1", 00:13:59.902 "superblock": true, 00:13:59.902 "num_base_bdevs": 2, 00:13:59.902 "num_base_bdevs_discovered": 1, 00:13:59.902 "num_base_bdevs_operational": 2, 00:13:59.902 "base_bdevs_list": [ 00:13:59.902 { 00:13:59.902 "name": "BaseBdev1", 00:13:59.902 "uuid": "bbbc149e-1176-48ac-829f-e3ae40a76c23", 00:13:59.902 "is_configured": true, 00:13:59.902 "data_offset": 256, 00:13:59.902 "data_size": 7936 00:13:59.902 }, 00:13:59.902 { 00:13:59.902 "name": "BaseBdev2", 00:13:59.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.902 "is_configured": false, 00:13:59.902 "data_offset": 0, 00:13:59.902 "data_size": 0 00:13:59.902 } 00:13:59.902 ] 00:13:59.902 }' 00:13:59.902 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.902 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:00.159 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:00.159 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.159 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:00.159 [2024-10-01 14:38:51.753907] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:00.159 [2024-10-01 14:38:51.753947] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:00.159 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.159 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:00.159 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.159 14:38:51 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:00.159 [2024-10-01 14:38:51.761934] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:00.159 [2024-10-01 14:38:51.763490] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:00.159 [2024-10-01 14:38:51.763528] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:00.159 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.159 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:00.159 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:00.159 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:00.159 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:00.159 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:00.159 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:00.159 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:00.159 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:00.159 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.159 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.159 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.159 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.160 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.160 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.160 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.160 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:00.160 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.160 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.160 "name": "Existed_Raid", 00:14:00.160 "uuid": "14c6433a-cb3d-41df-9406-5c1422955587", 00:14:00.160 "strip_size_kb": 0, 00:14:00.160 "state": "configuring", 00:14:00.160 "raid_level": "raid1", 00:14:00.160 "superblock": true, 00:14:00.160 "num_base_bdevs": 2, 00:14:00.160 "num_base_bdevs_discovered": 1, 00:14:00.160 "num_base_bdevs_operational": 2, 00:14:00.160 "base_bdevs_list": [ 00:14:00.160 { 00:14:00.160 "name": "BaseBdev1", 00:14:00.160 "uuid": "bbbc149e-1176-48ac-829f-e3ae40a76c23", 00:14:00.160 "is_configured": true, 00:14:00.160 "data_offset": 256, 00:14:00.160 "data_size": 7936 00:14:00.160 }, 00:14:00.160 { 00:14:00.160 "name": "BaseBdev2", 00:14:00.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.160 "is_configured": false, 00:14:00.160 "data_offset": 0, 00:14:00.160 "data_size": 0 00:14:00.160 } 00:14:00.160 ] 00:14:00.160 }' 00:14:00.160 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.160 14:38:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:00.418 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:14:00.418 
14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.418 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:00.418 [2024-10-01 14:38:52.076273] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:00.418 [2024-10-01 14:38:52.076453] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:00.418 [2024-10-01 14:38:52.076466] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:00.418 [2024-10-01 14:38:52.076673] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:00.418 BaseBdev2 00:14:00.418 [2024-10-01 14:38:52.076799] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:00.418 [2024-10-01 14:38:52.076809] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:00.418 [2024-10-01 14:38:52.076909] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.418 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.418 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:00.418 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:00.418 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:00.418 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:14:00.418 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:00.418 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:00.418 14:38:52 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:00.418 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.418 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:00.418 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.418 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:00.418 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.418 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:00.418 [ 00:14:00.418 { 00:14:00.418 "name": "BaseBdev2", 00:14:00.418 "aliases": [ 00:14:00.418 "eb5ee831-d659-45f5-8b58-609d035a4de8" 00:14:00.418 ], 00:14:00.418 "product_name": "Malloc disk", 00:14:00.418 "block_size": 4096, 00:14:00.418 "num_blocks": 8192, 00:14:00.418 "uuid": "eb5ee831-d659-45f5-8b58-609d035a4de8", 00:14:00.418 "assigned_rate_limits": { 00:14:00.418 "rw_ios_per_sec": 0, 00:14:00.418 "rw_mbytes_per_sec": 0, 00:14:00.418 "r_mbytes_per_sec": 0, 00:14:00.418 "w_mbytes_per_sec": 0 00:14:00.418 }, 00:14:00.418 "claimed": true, 00:14:00.418 "claim_type": "exclusive_write", 00:14:00.418 "zoned": false, 00:14:00.418 "supported_io_types": { 00:14:00.418 "read": true, 00:14:00.418 "write": true, 00:14:00.418 "unmap": true, 00:14:00.418 "flush": true, 00:14:00.418 "reset": true, 00:14:00.418 "nvme_admin": false, 00:14:00.418 "nvme_io": false, 00:14:00.418 "nvme_io_md": false, 00:14:00.418 "write_zeroes": true, 00:14:00.418 "zcopy": true, 00:14:00.418 "get_zone_info": false, 00:14:00.418 "zone_management": false, 00:14:00.418 "zone_append": false, 00:14:00.418 "compare": false, 00:14:00.418 "compare_and_write": false, 00:14:00.418 "abort": true, 
00:14:00.418 "seek_hole": false, 00:14:00.418 "seek_data": false, 00:14:00.418 "copy": true, 00:14:00.418 "nvme_iov_md": false 00:14:00.418 }, 00:14:00.418 "memory_domains": [ 00:14:00.418 { 00:14:00.418 "dma_device_id": "system", 00:14:00.418 "dma_device_type": 1 00:14:00.418 }, 00:14:00.418 { 00:14:00.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:00.418 "dma_device_type": 2 00:14:00.418 } 00:14:00.418 ], 00:14:00.418 "driver_specific": {} 00:14:00.418 } 00:14:00.418 ] 00:14:00.418 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.418 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:14:00.418 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:00.418 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:00.418 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:00.418 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:00.418 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:00.418 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:00.418 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:00.418 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:00.418 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.418 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.418 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:14:00.419 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.676 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.676 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.676 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.676 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:00.676 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.676 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.676 "name": "Existed_Raid", 00:14:00.676 "uuid": "14c6433a-cb3d-41df-9406-5c1422955587", 00:14:00.676 "strip_size_kb": 0, 00:14:00.676 "state": "online", 00:14:00.676 "raid_level": "raid1", 00:14:00.676 "superblock": true, 00:14:00.676 "num_base_bdevs": 2, 00:14:00.676 "num_base_bdevs_discovered": 2, 00:14:00.676 "num_base_bdevs_operational": 2, 00:14:00.676 "base_bdevs_list": [ 00:14:00.676 { 00:14:00.676 "name": "BaseBdev1", 00:14:00.676 "uuid": "bbbc149e-1176-48ac-829f-e3ae40a76c23", 00:14:00.676 "is_configured": true, 00:14:00.676 "data_offset": 256, 00:14:00.676 "data_size": 7936 00:14:00.676 }, 00:14:00.676 { 00:14:00.676 "name": "BaseBdev2", 00:14:00.676 "uuid": "eb5ee831-d659-45f5-8b58-609d035a4de8", 00:14:00.676 "is_configured": true, 00:14:00.676 "data_offset": 256, 00:14:00.676 "data_size": 7936 00:14:00.676 } 00:14:00.676 ] 00:14:00.676 }' 00:14:00.676 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.676 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:01.008 14:38:52 bdev_raid.raid_state_function_test_sb_4k 
-- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:01.008 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:01.008 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:01.008 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:01.008 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:14:01.008 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:01.008 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:01.008 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:01.008 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:01.009 [2024-10-01 14:38:52.376630] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:01.009 "name": "Existed_Raid", 00:14:01.009 "aliases": [ 00:14:01.009 "14c6433a-cb3d-41df-9406-5c1422955587" 00:14:01.009 ], 00:14:01.009 "product_name": "Raid Volume", 00:14:01.009 "block_size": 4096, 00:14:01.009 "num_blocks": 7936, 00:14:01.009 "uuid": "14c6433a-cb3d-41df-9406-5c1422955587", 00:14:01.009 "assigned_rate_limits": { 00:14:01.009 "rw_ios_per_sec": 0, 00:14:01.009 "rw_mbytes_per_sec": 0, 00:14:01.009 "r_mbytes_per_sec": 0, 00:14:01.009 "w_mbytes_per_sec": 0 00:14:01.009 }, 00:14:01.009 "claimed": false, 
00:14:01.009 "zoned": false, 00:14:01.009 "supported_io_types": { 00:14:01.009 "read": true, 00:14:01.009 "write": true, 00:14:01.009 "unmap": false, 00:14:01.009 "flush": false, 00:14:01.009 "reset": true, 00:14:01.009 "nvme_admin": false, 00:14:01.009 "nvme_io": false, 00:14:01.009 "nvme_io_md": false, 00:14:01.009 "write_zeroes": true, 00:14:01.009 "zcopy": false, 00:14:01.009 "get_zone_info": false, 00:14:01.009 "zone_management": false, 00:14:01.009 "zone_append": false, 00:14:01.009 "compare": false, 00:14:01.009 "compare_and_write": false, 00:14:01.009 "abort": false, 00:14:01.009 "seek_hole": false, 00:14:01.009 "seek_data": false, 00:14:01.009 "copy": false, 00:14:01.009 "nvme_iov_md": false 00:14:01.009 }, 00:14:01.009 "memory_domains": [ 00:14:01.009 { 00:14:01.009 "dma_device_id": "system", 00:14:01.009 "dma_device_type": 1 00:14:01.009 }, 00:14:01.009 { 00:14:01.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.009 "dma_device_type": 2 00:14:01.009 }, 00:14:01.009 { 00:14:01.009 "dma_device_id": "system", 00:14:01.009 "dma_device_type": 1 00:14:01.009 }, 00:14:01.009 { 00:14:01.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.009 "dma_device_type": 2 00:14:01.009 } 00:14:01.009 ], 00:14:01.009 "driver_specific": { 00:14:01.009 "raid": { 00:14:01.009 "uuid": "14c6433a-cb3d-41df-9406-5c1422955587", 00:14:01.009 "strip_size_kb": 0, 00:14:01.009 "state": "online", 00:14:01.009 "raid_level": "raid1", 00:14:01.009 "superblock": true, 00:14:01.009 "num_base_bdevs": 2, 00:14:01.009 "num_base_bdevs_discovered": 2, 00:14:01.009 "num_base_bdevs_operational": 2, 00:14:01.009 "base_bdevs_list": [ 00:14:01.009 { 00:14:01.009 "name": "BaseBdev1", 00:14:01.009 "uuid": "bbbc149e-1176-48ac-829f-e3ae40a76c23", 00:14:01.009 "is_configured": true, 00:14:01.009 "data_offset": 256, 00:14:01.009 "data_size": 7936 00:14:01.009 }, 00:14:01.009 { 00:14:01.009 "name": "BaseBdev2", 00:14:01.009 "uuid": "eb5ee831-d659-45f5-8b58-609d035a4de8", 00:14:01.009 
"is_configured": true, 00:14:01.009 "data_offset": 256, 00:14:01.009 "data_size": 7936 00:14:01.009 } 00:14:01.009 ] 00:14:01.009 } 00:14:01.009 } 00:14:01.009 }' 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:01.009 BaseBdev2' 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev2 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:01.009 [2024-10-01 14:38:52.524439] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state 
Existed_Raid online raid1 0 1 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.009 "name": "Existed_Raid", 00:14:01.009 "uuid": "14c6433a-cb3d-41df-9406-5c1422955587", 00:14:01.009 "strip_size_kb": 0, 00:14:01.009 "state": "online", 00:14:01.009 "raid_level": 
"raid1", 00:14:01.009 "superblock": true, 00:14:01.009 "num_base_bdevs": 2, 00:14:01.009 "num_base_bdevs_discovered": 1, 00:14:01.009 "num_base_bdevs_operational": 1, 00:14:01.009 "base_bdevs_list": [ 00:14:01.009 { 00:14:01.009 "name": null, 00:14:01.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.009 "is_configured": false, 00:14:01.009 "data_offset": 0, 00:14:01.009 "data_size": 7936 00:14:01.009 }, 00:14:01.009 { 00:14:01.009 "name": "BaseBdev2", 00:14:01.009 "uuid": "eb5ee831-d659-45f5-8b58-609d035a4de8", 00:14:01.009 "is_configured": true, 00:14:01.009 "data_offset": 256, 00:14:01.009 "data_size": 7936 00:14:01.009 } 00:14:01.009 ] 00:14:01.009 }' 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.009 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:01.267 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:01.267 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:01.267 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:01.267 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.267 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.267 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:01.267 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.267 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:01.267 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:01.267 14:38:52 bdev_raid.raid_state_function_test_sb_4k 
-- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:01.267 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.267 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:01.267 [2024-10-01 14:38:52.910970] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:01.267 [2024-10-01 14:38:52.911052] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:01.525 [2024-10-01 14:38:52.957610] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:01.525 [2024-10-01 14:38:52.957647] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:01.525 [2024-10-01 14:38:52.957656] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:01.525 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.525 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:01.525 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:01.525 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.525 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:01.525 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.525 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:01.525 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.525 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:01.525 
14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:01.525 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:14:01.525 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 83755 00:14:01.525 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 83755 ']' 00:14:01.525 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 83755 00:14:01.525 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:14:01.525 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:01.525 14:38:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83755 00:14:01.525 killing process with pid 83755 00:14:01.525 14:38:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:01.525 14:38:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:01.525 14:38:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83755' 00:14:01.525 14:38:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # kill 83755 00:14:01.525 [2024-10-01 14:38:53.016223] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:01.525 14:38:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 83755 00:14:01.525 [2024-10-01 14:38:53.024651] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:02.091 14:38:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:14:02.091 00:14:02.091 real 0m3.514s 00:14:02.091 user 0m5.067s 00:14:02.091 sys 0m0.540s 00:14:02.091 14:38:53 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:02.091 ************************************ 00:14:02.091 END TEST raid_state_function_test_sb_4k 00:14:02.091 ************************************ 00:14:02.091 14:38:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:02.091 14:38:53 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:14:02.091 14:38:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:02.091 14:38:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:02.091 14:38:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:02.091 ************************************ 00:14:02.091 START TEST raid_superblock_test_4k 00:14:02.091 ************************************ 00:14:02.091 14:38:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:14:02.091 14:38:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:14:02.091 14:38:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:14:02.091 14:38:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:02.091 14:38:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:02.091 14:38:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:02.091 14:38:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:02.091 14:38:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:02.091 14:38:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:02.091 14:38:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:02.091 
14:38:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:02.091 14:38:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:02.091 14:38:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:02.091 14:38:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:02.091 14:38:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:14:02.091 14:38:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:14:02.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.091 14:38:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=83985 00:14:02.091 14:38:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 83985 00:14:02.091 14:38:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # '[' -z 83985 ']' 00:14:02.091 14:38:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.091 14:38:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:02.091 14:38:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.091 14:38:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:02.091 14:38:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:02.091 14:38:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:02.348 [2024-10-01 14:38:53.791229] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:14:02.348 [2024-10-01 14:38:53.791421] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83985 ] 00:14:02.348 [2024-10-01 14:38:53.933488] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.605 [2024-10-01 14:38:54.090946] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.605 [2024-10-01 14:38:54.201136] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:02.605 [2024-10-01 14:38:54.201279] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:03.169 14:38:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:03.169 14:38:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0 00:14:03.169 14:38:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:03.169 14:38:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:03.169 14:38:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:03.169 14:38:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:03.169 14:38:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:03.169 14:38:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:03.169 14:38:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:03.169 14:38:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:03.169 14:38:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:14:03.169 14:38:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.169 14:38:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:03.169 malloc1 00:14:03.169 14:38:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.169 14:38:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:03.169 14:38:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.169 14:38:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:03.169 [2024-10-01 14:38:54.680166] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:03.170 [2024-10-01 14:38:54.680220] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:03.170 [2024-10-01 14:38:54.680236] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:03.170 [2024-10-01 14:38:54.680246] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:03.170 [2024-10-01 14:38:54.682046] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:03.170 [2024-10-01 14:38:54.682076] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:03.170 pt1 00:14:03.170 14:38:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.170 14:38:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:03.170 14:38:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:03.170 14:38:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:03.170 14:38:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:14:03.170 14:38:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:03.170 14:38:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:03.170 14:38:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:03.170 14:38:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:03.170 14:38:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:14:03.170 14:38:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.170 14:38:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:03.170 malloc2 00:14:03.170 14:38:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.170 14:38:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:03.170 14:38:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.170 14:38:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:03.170 [2024-10-01 14:38:54.737691] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:03.170 [2024-10-01 14:38:54.737750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:03.170 [2024-10-01 14:38:54.737769] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:03.170 [2024-10-01 14:38:54.737777] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:03.170 [2024-10-01 14:38:54.739571] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:03.170 [2024-10-01 
14:38:54.739602] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:03.170 pt2 00:14:03.170 14:38:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.170 14:38:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:03.170 14:38:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:03.170 14:38:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:14:03.170 14:38:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.170 14:38:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:03.170 [2024-10-01 14:38:54.745747] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:03.170 [2024-10-01 14:38:54.747297] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:03.170 [2024-10-01 14:38:54.747427] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:03.170 [2024-10-01 14:38:54.747438] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:03.170 [2024-10-01 14:38:54.747644] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:03.170 [2024-10-01 14:38:54.747877] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:03.170 [2024-10-01 14:38:54.747940] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:03.170 [2024-10-01 14:38:54.748112] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:03.170 14:38:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.170 14:38:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:03.170 14:38:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:03.170 14:38:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:03.170 14:38:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:03.170 14:38:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:03.170 14:38:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:03.170 14:38:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.170 14:38:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.170 14:38:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.170 14:38:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.170 14:38:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.170 14:38:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.170 14:38:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.170 14:38:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:03.170 14:38:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.170 14:38:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.170 "name": "raid_bdev1", 00:14:03.170 "uuid": "1b7e758a-7bc6-4ee7-b9ff-e002fe7aae55", 00:14:03.170 "strip_size_kb": 0, 00:14:03.170 "state": "online", 00:14:03.170 "raid_level": "raid1", 00:14:03.170 "superblock": true, 00:14:03.170 "num_base_bdevs": 2, 00:14:03.170 
"num_base_bdevs_discovered": 2, 00:14:03.170 "num_base_bdevs_operational": 2, 00:14:03.170 "base_bdevs_list": [ 00:14:03.170 { 00:14:03.170 "name": "pt1", 00:14:03.170 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:03.170 "is_configured": true, 00:14:03.170 "data_offset": 256, 00:14:03.170 "data_size": 7936 00:14:03.170 }, 00:14:03.170 { 00:14:03.170 "name": "pt2", 00:14:03.170 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:03.170 "is_configured": true, 00:14:03.170 "data_offset": 256, 00:14:03.170 "data_size": 7936 00:14:03.170 } 00:14:03.170 ] 00:14:03.170 }' 00:14:03.170 14:38:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.170 14:38:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:03.427 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:03.427 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:03.427 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:03.427 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:03.427 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:14:03.427 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:03.427 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:03.427 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:03.427 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.427 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:03.427 [2024-10-01 14:38:55.070051] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:14:03.427 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.427 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:03.427 "name": "raid_bdev1", 00:14:03.427 "aliases": [ 00:14:03.427 "1b7e758a-7bc6-4ee7-b9ff-e002fe7aae55" 00:14:03.427 ], 00:14:03.427 "product_name": "Raid Volume", 00:14:03.427 "block_size": 4096, 00:14:03.427 "num_blocks": 7936, 00:14:03.427 "uuid": "1b7e758a-7bc6-4ee7-b9ff-e002fe7aae55", 00:14:03.427 "assigned_rate_limits": { 00:14:03.427 "rw_ios_per_sec": 0, 00:14:03.427 "rw_mbytes_per_sec": 0, 00:14:03.427 "r_mbytes_per_sec": 0, 00:14:03.427 "w_mbytes_per_sec": 0 00:14:03.427 }, 00:14:03.427 "claimed": false, 00:14:03.427 "zoned": false, 00:14:03.427 "supported_io_types": { 00:14:03.427 "read": true, 00:14:03.427 "write": true, 00:14:03.427 "unmap": false, 00:14:03.427 "flush": false, 00:14:03.427 "reset": true, 00:14:03.427 "nvme_admin": false, 00:14:03.427 "nvme_io": false, 00:14:03.427 "nvme_io_md": false, 00:14:03.427 "write_zeroes": true, 00:14:03.427 "zcopy": false, 00:14:03.427 "get_zone_info": false, 00:14:03.427 "zone_management": false, 00:14:03.427 "zone_append": false, 00:14:03.427 "compare": false, 00:14:03.427 "compare_and_write": false, 00:14:03.427 "abort": false, 00:14:03.427 "seek_hole": false, 00:14:03.427 "seek_data": false, 00:14:03.427 "copy": false, 00:14:03.427 "nvme_iov_md": false 00:14:03.427 }, 00:14:03.427 "memory_domains": [ 00:14:03.427 { 00:14:03.427 "dma_device_id": "system", 00:14:03.427 "dma_device_type": 1 00:14:03.427 }, 00:14:03.427 { 00:14:03.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.427 "dma_device_type": 2 00:14:03.427 }, 00:14:03.427 { 00:14:03.427 "dma_device_id": "system", 00:14:03.427 "dma_device_type": 1 00:14:03.427 }, 00:14:03.427 { 00:14:03.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.427 "dma_device_type": 2 00:14:03.427 } 00:14:03.427 ], 
00:14:03.427 "driver_specific": { 00:14:03.427 "raid": { 00:14:03.427 "uuid": "1b7e758a-7bc6-4ee7-b9ff-e002fe7aae55", 00:14:03.427 "strip_size_kb": 0, 00:14:03.427 "state": "online", 00:14:03.427 "raid_level": "raid1", 00:14:03.427 "superblock": true, 00:14:03.427 "num_base_bdevs": 2, 00:14:03.427 "num_base_bdevs_discovered": 2, 00:14:03.427 "num_base_bdevs_operational": 2, 00:14:03.427 "base_bdevs_list": [ 00:14:03.427 { 00:14:03.427 "name": "pt1", 00:14:03.427 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:03.427 "is_configured": true, 00:14:03.427 "data_offset": 256, 00:14:03.427 "data_size": 7936 00:14:03.427 }, 00:14:03.427 { 00:14:03.427 "name": "pt2", 00:14:03.427 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:03.427 "is_configured": true, 00:14:03.427 "data_offset": 256, 00:14:03.427 "data_size": 7936 00:14:03.427 } 00:14:03.427 ] 00:14:03.427 } 00:14:03.427 } 00:14:03.427 }' 00:14:03.427 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:03.685 pt2' 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.685 14:38:55 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:03.685 [2024-10-01 14:38:55.238025] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1b7e758a-7bc6-4ee7-b9ff-e002fe7aae55 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 1b7e758a-7bc6-4ee7-b9ff-e002fe7aae55 ']' 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:03.685 [2024-10-01 14:38:55.261791] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:03.685 [2024-10-01 14:38:55.261886] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:03.685 [2024-10-01 14:38:55.261952] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:03.685 [2024-10-01 14:38:55.262009] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:03.685 [2024-10-01 14:38:55.262020] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:03.685 [2024-10-01 14:38:55.349833] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:03.685 [2024-10-01 14:38:55.351422] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:03.685 [2024-10-01 14:38:55.351480] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:03.685 [2024-10-01 14:38:55.351526] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:03.685 [2024-10-01 14:38:55.351537] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:03.685 [2024-10-01 14:38:55.351546] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:14:03.685 request: 00:14:03.685 { 00:14:03.685 "name": "raid_bdev1", 00:14:03.685 "raid_level": "raid1", 00:14:03.685 "base_bdevs": [ 00:14:03.685 "malloc1", 00:14:03.685 "malloc2" 00:14:03.685 ], 00:14:03.685 "superblock": false, 00:14:03.685 "method": "bdev_raid_create", 00:14:03.685 "req_id": 1 00:14:03.685 } 00:14:03.685 Got JSON-RPC error response 00:14:03.685 response: 00:14:03.685 { 00:14:03.685 "code": -17, 00:14:03.685 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:03.685 } 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:03.685 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.943 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:03.943 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:03.943 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:14:03.943 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.943 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:03.943 [2024-10-01 14:38:55.389819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:03.943 [2024-10-01 14:38:55.389862] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:03.943 [2024-10-01 14:38:55.389875] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:03.943 [2024-10-01 14:38:55.389884] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:03.943 [2024-10-01 14:38:55.391728] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:03.943 [2024-10-01 14:38:55.391758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:03.943 [2024-10-01 14:38:55.391817] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:03.943 [2024-10-01 14:38:55.391863] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:03.943 pt1 00:14:03.943 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.943 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:14:03.943 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:03.943 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:03.943 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:03.943 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:03.943 14:38:55 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:03.943 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.943 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.943 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.943 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.943 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.943 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.943 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:03.943 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.943 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.943 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.943 "name": "raid_bdev1", 00:14:03.943 "uuid": "1b7e758a-7bc6-4ee7-b9ff-e002fe7aae55", 00:14:03.943 "strip_size_kb": 0, 00:14:03.943 "state": "configuring", 00:14:03.943 "raid_level": "raid1", 00:14:03.943 "superblock": true, 00:14:03.943 "num_base_bdevs": 2, 00:14:03.943 "num_base_bdevs_discovered": 1, 00:14:03.943 "num_base_bdevs_operational": 2, 00:14:03.943 "base_bdevs_list": [ 00:14:03.943 { 00:14:03.943 "name": "pt1", 00:14:03.943 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:03.943 "is_configured": true, 00:14:03.943 "data_offset": 256, 00:14:03.943 "data_size": 7936 00:14:03.943 }, 00:14:03.943 { 00:14:03.943 "name": null, 00:14:03.943 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:03.943 "is_configured": false, 00:14:03.943 "data_offset": 256, 00:14:03.943 "data_size": 7936 00:14:03.943 } 
00:14:03.943 ] 00:14:03.943 }' 00:14:03.943 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.943 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:04.201 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:14:04.201 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:04.201 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:04.201 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:04.201 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.201 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:04.201 [2024-10-01 14:38:55.709884] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:04.201 [2024-10-01 14:38:55.709939] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.201 [2024-10-01 14:38:55.709954] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:04.201 [2024-10-01 14:38:55.709964] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.201 [2024-10-01 14:38:55.710338] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.201 [2024-10-01 14:38:55.710352] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:04.201 [2024-10-01 14:38:55.710409] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:04.201 [2024-10-01 14:38:55.710425] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:04.201 [2024-10-01 14:38:55.710513] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:14:04.201 [2024-10-01 14:38:55.710522] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:04.201 [2024-10-01 14:38:55.710726] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:04.201 [2024-10-01 14:38:55.710836] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:04.201 [2024-10-01 14:38:55.710848] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:04.201 [2024-10-01 14:38:55.710953] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.201 pt2 00:14:04.201 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.201 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:04.201 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:04.201 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:04.201 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.201 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:04.201 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:04.201 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:04.201 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:04.201 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.201 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.201 14:38:55 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.201 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.201 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.201 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.201 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.201 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:04.201 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.201 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.201 "name": "raid_bdev1", 00:14:04.201 "uuid": "1b7e758a-7bc6-4ee7-b9ff-e002fe7aae55", 00:14:04.201 "strip_size_kb": 0, 00:14:04.201 "state": "online", 00:14:04.201 "raid_level": "raid1", 00:14:04.201 "superblock": true, 00:14:04.201 "num_base_bdevs": 2, 00:14:04.201 "num_base_bdevs_discovered": 2, 00:14:04.201 "num_base_bdevs_operational": 2, 00:14:04.201 "base_bdevs_list": [ 00:14:04.201 { 00:14:04.201 "name": "pt1", 00:14:04.201 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:04.201 "is_configured": true, 00:14:04.201 "data_offset": 256, 00:14:04.201 "data_size": 7936 00:14:04.201 }, 00:14:04.201 { 00:14:04.201 "name": "pt2", 00:14:04.201 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:04.201 "is_configured": true, 00:14:04.201 "data_offset": 256, 00:14:04.201 "data_size": 7936 00:14:04.201 } 00:14:04.201 ] 00:14:04.201 }' 00:14:04.201 14:38:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.201 14:38:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:04.459 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:14:04.459 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:04.459 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:04.459 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:04.459 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:14:04.459 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:04.459 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:04.459 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:04.459 14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.459 14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:04.459 [2024-10-01 14:38:56.030183] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:04.459 14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.459 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:04.459 "name": "raid_bdev1", 00:14:04.459 "aliases": [ 00:14:04.459 "1b7e758a-7bc6-4ee7-b9ff-e002fe7aae55" 00:14:04.459 ], 00:14:04.459 "product_name": "Raid Volume", 00:14:04.459 "block_size": 4096, 00:14:04.459 "num_blocks": 7936, 00:14:04.459 "uuid": "1b7e758a-7bc6-4ee7-b9ff-e002fe7aae55", 00:14:04.459 "assigned_rate_limits": { 00:14:04.459 "rw_ios_per_sec": 0, 00:14:04.459 "rw_mbytes_per_sec": 0, 00:14:04.459 "r_mbytes_per_sec": 0, 00:14:04.459 "w_mbytes_per_sec": 0 00:14:04.459 }, 00:14:04.459 "claimed": false, 00:14:04.459 "zoned": false, 00:14:04.459 "supported_io_types": { 00:14:04.459 "read": true, 00:14:04.459 "write": true, 00:14:04.459 "unmap": false, 
00:14:04.459 "flush": false, 00:14:04.459 "reset": true, 00:14:04.459 "nvme_admin": false, 00:14:04.459 "nvme_io": false, 00:14:04.459 "nvme_io_md": false, 00:14:04.459 "write_zeroes": true, 00:14:04.459 "zcopy": false, 00:14:04.459 "get_zone_info": false, 00:14:04.459 "zone_management": false, 00:14:04.459 "zone_append": false, 00:14:04.459 "compare": false, 00:14:04.459 "compare_and_write": false, 00:14:04.459 "abort": false, 00:14:04.459 "seek_hole": false, 00:14:04.459 "seek_data": false, 00:14:04.459 "copy": false, 00:14:04.459 "nvme_iov_md": false 00:14:04.459 }, 00:14:04.459 "memory_domains": [ 00:14:04.459 { 00:14:04.459 "dma_device_id": "system", 00:14:04.459 "dma_device_type": 1 00:14:04.459 }, 00:14:04.459 { 00:14:04.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.459 "dma_device_type": 2 00:14:04.459 }, 00:14:04.459 { 00:14:04.459 "dma_device_id": "system", 00:14:04.459 "dma_device_type": 1 00:14:04.459 }, 00:14:04.459 { 00:14:04.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.459 "dma_device_type": 2 00:14:04.459 } 00:14:04.459 ], 00:14:04.459 "driver_specific": { 00:14:04.459 "raid": { 00:14:04.459 "uuid": "1b7e758a-7bc6-4ee7-b9ff-e002fe7aae55", 00:14:04.459 "strip_size_kb": 0, 00:14:04.459 "state": "online", 00:14:04.459 "raid_level": "raid1", 00:14:04.459 "superblock": true, 00:14:04.459 "num_base_bdevs": 2, 00:14:04.459 "num_base_bdevs_discovered": 2, 00:14:04.459 "num_base_bdevs_operational": 2, 00:14:04.459 "base_bdevs_list": [ 00:14:04.459 { 00:14:04.459 "name": "pt1", 00:14:04.459 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:04.459 "is_configured": true, 00:14:04.459 "data_offset": 256, 00:14:04.459 "data_size": 7936 00:14:04.459 }, 00:14:04.459 { 00:14:04.459 "name": "pt2", 00:14:04.459 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:04.459 "is_configured": true, 00:14:04.459 "data_offset": 256, 00:14:04.459 "data_size": 7936 00:14:04.459 } 00:14:04.459 ] 00:14:04.459 } 00:14:04.459 } 00:14:04.459 }' 00:14:04.459 
14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:04.459 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:04.459 pt2' 00:14:04.459 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:04.459 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:14:04.459 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:04.459 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:04.459 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:04.459 14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.459 14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:04.459 14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.716 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:14:04.716 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:14:04.716 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:04.716 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:04.716 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:04.716 14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.716 
14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:04.716 14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.716 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:14:04.716 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:14:04.716 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:04.716 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:04.716 14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.716 14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:04.716 [2024-10-01 14:38:56.198187] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:04.716 14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.716 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 1b7e758a-7bc6-4ee7-b9ff-e002fe7aae55 '!=' 1b7e758a-7bc6-4ee7-b9ff-e002fe7aae55 ']' 00:14:04.716 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:14:04.716 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:04.716 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:14:04.716 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:04.716 14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.716 14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:04.716 [2024-10-01 14:38:56.222015] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:04.716 
14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.716 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:04.716 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.716 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:04.716 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:04.716 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:04.716 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:04.716 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.716 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.716 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.717 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.717 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.717 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.717 14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.717 14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:04.717 14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.717 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.717 "name": "raid_bdev1", 00:14:04.717 "uuid": "1b7e758a-7bc6-4ee7-b9ff-e002fe7aae55", 
00:14:04.717 "strip_size_kb": 0, 00:14:04.717 "state": "online", 00:14:04.717 "raid_level": "raid1", 00:14:04.717 "superblock": true, 00:14:04.717 "num_base_bdevs": 2, 00:14:04.717 "num_base_bdevs_discovered": 1, 00:14:04.717 "num_base_bdevs_operational": 1, 00:14:04.717 "base_bdevs_list": [ 00:14:04.717 { 00:14:04.717 "name": null, 00:14:04.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.717 "is_configured": false, 00:14:04.717 "data_offset": 0, 00:14:04.717 "data_size": 7936 00:14:04.717 }, 00:14:04.717 { 00:14:04.717 "name": "pt2", 00:14:04.717 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:04.717 "is_configured": true, 00:14:04.717 "data_offset": 256, 00:14:04.717 "data_size": 7936 00:14:04.717 } 00:14:04.717 ] 00:14:04.717 }' 00:14:04.717 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.717 14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:04.974 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:04.974 14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.974 14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:04.974 [2024-10-01 14:38:56.530084] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:04.974 [2024-10-01 14:38:56.530112] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:04.974 [2024-10-01 14:38:56.530196] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:04.974 [2024-10-01 14:38:56.530256] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:04.974 [2024-10-01 14:38:56.530268] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:04.974 14:38:56 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.974 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.974 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:04.974 14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.974 14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:04.974 14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.974 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:04.974 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:04.974 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:04.974 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:04.974 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:04.974 14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.974 14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:04.974 14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.974 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:04.974 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:04.974 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:04.974 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:04.974 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:14:04.974 14:38:56 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:04.974 14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.974 14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:04.974 [2024-10-01 14:38:56.578075] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:04.974 [2024-10-01 14:38:56.578128] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.974 [2024-10-01 14:38:56.578146] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:04.974 [2024-10-01 14:38:56.578155] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.974 [2024-10-01 14:38:56.580030] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.975 [2024-10-01 14:38:56.580061] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:04.975 [2024-10-01 14:38:56.580123] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:04.975 [2024-10-01 14:38:56.580159] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:04.975 [2024-10-01 14:38:56.580233] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:04.975 [2024-10-01 14:38:56.580244] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:04.975 [2024-10-01 14:38:56.580438] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:04.975 [2024-10-01 14:38:56.580546] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:04.975 [2024-10-01 14:38:56.580552] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 
00:14:04.975 [2024-10-01 14:38:56.580667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.975 pt2 00:14:04.975 14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.975 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:04.975 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.975 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:04.975 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:04.975 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:04.975 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:04.975 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.975 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.975 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.975 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.975 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.975 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.975 14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.975 14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:04.975 14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.975 14:38:56 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.975 "name": "raid_bdev1", 00:14:04.975 "uuid": "1b7e758a-7bc6-4ee7-b9ff-e002fe7aae55", 00:14:04.975 "strip_size_kb": 0, 00:14:04.975 "state": "online", 00:14:04.975 "raid_level": "raid1", 00:14:04.975 "superblock": true, 00:14:04.975 "num_base_bdevs": 2, 00:14:04.975 "num_base_bdevs_discovered": 1, 00:14:04.975 "num_base_bdevs_operational": 1, 00:14:04.975 "base_bdevs_list": [ 00:14:04.975 { 00:14:04.975 "name": null, 00:14:04.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.975 "is_configured": false, 00:14:04.975 "data_offset": 256, 00:14:04.975 "data_size": 7936 00:14:04.975 }, 00:14:04.975 { 00:14:04.975 "name": "pt2", 00:14:04.975 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:04.975 "is_configured": true, 00:14:04.975 "data_offset": 256, 00:14:04.975 "data_size": 7936 00:14:04.975 } 00:14:04.975 ] 00:14:04.975 }' 00:14:04.975 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.975 14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:05.232 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:05.232 14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.232 14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:05.232 [2024-10-01 14:38:56.914149] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:05.232 [2024-10-01 14:38:56.914177] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:05.232 [2024-10-01 14:38:56.914248] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:05.232 [2024-10-01 14:38:56.914300] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:05.232 [2024-10-01 14:38:56.914310] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:05.590 14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.590 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.590 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:05.590 14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.590 14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:05.590 14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.590 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:05.590 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:14:05.590 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:14:05.590 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:05.590 14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.590 14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:05.590 [2024-10-01 14:38:56.954196] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:05.590 [2024-10-01 14:38:56.954251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.590 [2024-10-01 14:38:56.954272] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:14:05.590 [2024-10-01 14:38:56.954282] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.590 [2024-10-01 14:38:56.956184] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.590 [2024-10-01 14:38:56.956216] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:05.590 [2024-10-01 14:38:56.956283] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:05.590 [2024-10-01 14:38:56.956320] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:05.590 [2024-10-01 14:38:56.956420] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:05.590 [2024-10-01 14:38:56.956428] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:05.590 [2024-10-01 14:38:56.956443] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:14:05.590 [2024-10-01 14:38:56.956485] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:05.590 [2024-10-01 14:38:56.956541] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:14:05.590 [2024-10-01 14:38:56.956548] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:05.590 [2024-10-01 14:38:56.956764] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:05.590 [2024-10-01 14:38:56.956905] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:14:05.590 [2024-10-01 14:38:56.956916] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:14:05.590 [2024-10-01 14:38:56.957029] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.590 pt1 00:14:05.590 14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.590 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:14:05.590 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:05.590 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.590 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.590 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:05.590 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:05.590 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:05.590 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.590 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.590 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.590 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.590 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.590 14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.590 14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:05.590 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.590 14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.590 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.590 "name": "raid_bdev1", 00:14:05.590 "uuid": "1b7e758a-7bc6-4ee7-b9ff-e002fe7aae55", 00:14:05.590 "strip_size_kb": 0, 00:14:05.590 "state": "online", 00:14:05.590 "raid_level": "raid1", 
00:14:05.590 "superblock": true, 00:14:05.590 "num_base_bdevs": 2, 00:14:05.590 "num_base_bdevs_discovered": 1, 00:14:05.590 "num_base_bdevs_operational": 1, 00:14:05.590 "base_bdevs_list": [ 00:14:05.590 { 00:14:05.590 "name": null, 00:14:05.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.590 "is_configured": false, 00:14:05.590 "data_offset": 256, 00:14:05.590 "data_size": 7936 00:14:05.590 }, 00:14:05.590 { 00:14:05.590 "name": "pt2", 00:14:05.590 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:05.590 "is_configured": true, 00:14:05.590 "data_offset": 256, 00:14:05.590 "data_size": 7936 00:14:05.590 } 00:14:05.590 ] 00:14:05.590 }' 00:14:05.590 14:38:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.590 14:38:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:05.861 14:38:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:05.861 14:38:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:05.861 14:38:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.861 14:38:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:05.861 14:38:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.861 14:38:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:05.861 14:38:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:05.861 14:38:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.861 14:38:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:05.861 14:38:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:05.861 
[2024-10-01 14:38:57.302400] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:05.861 14:38:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.861 14:38:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 1b7e758a-7bc6-4ee7-b9ff-e002fe7aae55 '!=' 1b7e758a-7bc6-4ee7-b9ff-e002fe7aae55 ']' 00:14:05.861 14:38:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 83985 00:14:05.861 14:38:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 83985 ']' 00:14:05.861 14:38:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 83985 00:14:05.861 14:38:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname 00:14:05.861 14:38:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:05.861 14:38:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83985 00:14:05.861 killing process with pid 83985 00:14:05.861 14:38:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:05.861 14:38:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:05.861 14:38:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83985' 00:14:05.861 14:38:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 83985 00:14:05.861 [2024-10-01 14:38:57.345162] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:05.861 [2024-10-01 14:38:57.345228] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:05.861 14:38:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@974 -- # wait 83985 00:14:05.861 [2024-10-01 14:38:57.345263] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:14:05.861 [2024-10-01 14:38:57.345278] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:14:05.861 [2024-10-01 14:38:57.446177] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:06.427 ************************************ 00:14:06.427 END TEST raid_superblock_test_4k 00:14:06.427 ************************************ 00:14:06.427 14:38:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:14:06.427 00:14:06.427 real 0m4.368s 00:14:06.427 user 0m6.608s 00:14:06.427 sys 0m0.736s 00:14:06.427 14:38:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:06.427 14:38:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:06.686 14:38:58 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:14:06.686 14:38:58 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:14:06.686 14:38:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:06.686 14:38:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:06.686 14:38:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:06.686 ************************************ 00:14:06.686 START TEST raid_rebuild_test_sb_4k 00:14:06.686 ************************************ 00:14:06.686 14:38:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:14:06.686 14:38:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:06.686 14:38:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:06.686 14:38:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:06.686 14:38:58 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:06.686 14:38:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:06.686 14:38:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:06.686 14:38:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:06.686 14:38:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:06.686 14:38:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:06.686 14:38:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:06.686 14:38:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:06.686 14:38:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:06.686 14:38:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:06.686 14:38:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:06.686 14:38:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:06.686 14:38:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:06.686 14:38:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:06.686 14:38:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:06.686 14:38:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:06.686 14:38:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:06.686 14:38:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:06.686 14:38:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:06.686 14:38:58 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:06.686 14:38:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:06.686 14:38:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=84301 00:14:06.686 14:38:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 84301 00:14:06.686 14:38:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 84301 ']' 00:14:06.686 14:38:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:06.686 14:38:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:06.686 14:38:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:06.686 14:38:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:06.686 14:38:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:06.686 14:38:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:06.686 [2024-10-01 14:38:58.229345] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:14:06.686 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:06.686 Zero copy mechanism will not be used. 
00:14:06.686 [2024-10-01 14:38:58.229837] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84301 ] 00:14:06.945 [2024-10-01 14:38:58.377949] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.945 [2024-10-01 14:38:58.584795] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.204 [2024-10-01 14:38:58.738922] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:07.204 [2024-10-01 14:38:58.738975] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:07.462 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:07.462 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:14:07.462 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:07.462 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:14:07.462 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.462 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:07.462 BaseBdev1_malloc 00:14:07.462 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.462 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:07.462 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.462 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:07.462 [2024-10-01 14:38:59.116264] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:07.462 [2024-10-01 14:38:59.116320] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:07.462 [2024-10-01 14:38:59.116337] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:07.462 [2024-10-01 14:38:59.116350] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:07.462 [2024-10-01 14:38:59.118477] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:07.462 [2024-10-01 14:38:59.118511] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:07.462 BaseBdev1 00:14:07.462 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.462 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:07.462 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:14:07.462 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.462 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:07.722 BaseBdev2_malloc 00:14:07.722 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.722 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:07.722 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.722 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:07.722 [2024-10-01 14:38:59.169731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:07.722 [2024-10-01 14:38:59.169779] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:14:07.722 [2024-10-01 14:38:59.169796] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:07.722 [2024-10-01 14:38:59.169809] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:07.722 [2024-10-01 14:38:59.171921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:07.722 [2024-10-01 14:38:59.171953] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:07.722 BaseBdev2 00:14:07.722 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.722 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:14:07.722 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.722 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:07.722 spare_malloc 00:14:07.722 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.722 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:07.722 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.722 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:07.722 spare_delay 00:14:07.722 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.722 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:07.722 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.722 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:07.722 
[2024-10-01 14:38:59.213552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:07.722 [2024-10-01 14:38:59.213597] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:07.722 [2024-10-01 14:38:59.213614] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:07.722 [2024-10-01 14:38:59.213624] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:07.722 [2024-10-01 14:38:59.215752] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:07.722 [2024-10-01 14:38:59.215779] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:07.722 spare 00:14:07.722 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.722 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:07.722 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.722 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:07.722 [2024-10-01 14:38:59.221602] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:07.722 [2024-10-01 14:38:59.223422] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:07.722 [2024-10-01 14:38:59.223590] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:07.722 [2024-10-01 14:38:59.223603] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:07.722 [2024-10-01 14:38:59.223880] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:07.722 [2024-10-01 14:38:59.224023] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:07.722 [2024-10-01 
14:38:59.224032] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:07.722 [2024-10-01 14:38:59.224161] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:07.722 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.722 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:07.722 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:07.722 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:07.722 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:07.722 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:07.722 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:07.722 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.722 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.722 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.722 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.722 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.722 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.722 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.722 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:07.722 14:38:59 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.722 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.722 "name": "raid_bdev1", 00:14:07.722 "uuid": "8b2d41f8-ba8d-4f77-a8f1-6ccec5fa192a", 00:14:07.722 "strip_size_kb": 0, 00:14:07.722 "state": "online", 00:14:07.722 "raid_level": "raid1", 00:14:07.722 "superblock": true, 00:14:07.722 "num_base_bdevs": 2, 00:14:07.722 "num_base_bdevs_discovered": 2, 00:14:07.722 "num_base_bdevs_operational": 2, 00:14:07.722 "base_bdevs_list": [ 00:14:07.722 { 00:14:07.722 "name": "BaseBdev1", 00:14:07.722 "uuid": "68e42254-74d0-5d6f-931b-7d9f7429e773", 00:14:07.722 "is_configured": true, 00:14:07.722 "data_offset": 256, 00:14:07.722 "data_size": 7936 00:14:07.722 }, 00:14:07.722 { 00:14:07.722 "name": "BaseBdev2", 00:14:07.722 "uuid": "2f1b6baf-483a-5617-a04d-578ff9886d03", 00:14:07.722 "is_configured": true, 00:14:07.722 "data_offset": 256, 00:14:07.722 "data_size": 7936 00:14:07.722 } 00:14:07.722 ] 00:14:07.722 }' 00:14:07.722 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.722 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:07.982 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:07.982 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:07.982 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.982 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:07.982 [2024-10-01 14:38:59.553974] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:07.982 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.982 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:14:07.982 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.982 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:07.982 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.982 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:07.982 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.982 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:14:07.982 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:07.982 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:07.982 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:07.982 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:07.982 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:07.982 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:07.982 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:07.982 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:07.982 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:07.982 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:14:07.982 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:07.982 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:07.982 
14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:08.240 [2024-10-01 14:38:59.797785] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:08.240 /dev/nbd0 00:14:08.240 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:08.240 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:08.240 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:08.240 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:14:08.240 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:08.240 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:08.240 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:08.240 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:14:08.240 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:08.240 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:08.240 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:08.240 1+0 records in 00:14:08.240 1+0 records out 00:14:08.240 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326576 s, 12.5 MB/s 00:14:08.240 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:08.240 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:14:08.240 14:38:59 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:08.240 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:08.240 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:14:08.240 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:08.240 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:08.240 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:08.240 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:08.240 14:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:14:09.181 7936+0 records in 00:14:09.181 7936+0 records out 00:14:09.181 32505856 bytes (33 MB, 31 MiB) copied, 0.73272 s, 44.4 MB/s 00:14:09.181 14:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:09.181 14:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:09.181 14:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:09.181 14:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:09.181 14:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:14:09.181 14:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:09.181 14:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:09.181 14:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:09.181 
14:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:09.181 14:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:09.181 14:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:09.181 14:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:09.181 14:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:09.181 [2024-10-01 14:39:00.793864] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:09.181 14:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:14:09.181 14:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:14:09.181 14:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:09.181 14:39:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.181 14:39:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:09.181 [2024-10-01 14:39:00.801941] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:09.181 14:39:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.181 14:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:09.181 14:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:09.181 14:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:09.181 14:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:09.181 14:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:09.181 14:39:00 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:09.181 14:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.181 14:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.181 14:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.181 14:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.181 14:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.181 14:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.181 14:39:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.181 14:39:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:09.181 14:39:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.181 14:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.181 "name": "raid_bdev1", 00:14:09.181 "uuid": "8b2d41f8-ba8d-4f77-a8f1-6ccec5fa192a", 00:14:09.181 "strip_size_kb": 0, 00:14:09.181 "state": "online", 00:14:09.181 "raid_level": "raid1", 00:14:09.181 "superblock": true, 00:14:09.181 "num_base_bdevs": 2, 00:14:09.181 "num_base_bdevs_discovered": 1, 00:14:09.181 "num_base_bdevs_operational": 1, 00:14:09.181 "base_bdevs_list": [ 00:14:09.181 { 00:14:09.181 "name": null, 00:14:09.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.181 "is_configured": false, 00:14:09.181 "data_offset": 0, 00:14:09.181 "data_size": 7936 00:14:09.181 }, 00:14:09.181 { 00:14:09.181 "name": "BaseBdev2", 00:14:09.181 "uuid": "2f1b6baf-483a-5617-a04d-578ff9886d03", 00:14:09.181 "is_configured": true, 00:14:09.181 "data_offset": 256, 00:14:09.181 
"data_size": 7936 00:14:09.181 } 00:14:09.181 ] 00:14:09.181 }' 00:14:09.181 14:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.181 14:39:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:09.442 14:39:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:09.442 14:39:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.442 14:39:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:09.442 [2024-10-01 14:39:01.118067] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:09.701 [2024-10-01 14:39:01.128869] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:14:09.701 14:39:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.701 14:39:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:09.701 [2024-10-01 14:39:01.130727] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:10.642 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:10.642 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.642 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:10.642 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:10.642 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:10.642 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.642 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:10.642 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.642 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:10.642 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.642 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.642 "name": "raid_bdev1", 00:14:10.642 "uuid": "8b2d41f8-ba8d-4f77-a8f1-6ccec5fa192a", 00:14:10.642 "strip_size_kb": 0, 00:14:10.642 "state": "online", 00:14:10.642 "raid_level": "raid1", 00:14:10.642 "superblock": true, 00:14:10.642 "num_base_bdevs": 2, 00:14:10.642 "num_base_bdevs_discovered": 2, 00:14:10.642 "num_base_bdevs_operational": 2, 00:14:10.642 "process": { 00:14:10.642 "type": "rebuild", 00:14:10.642 "target": "spare", 00:14:10.642 "progress": { 00:14:10.642 "blocks": 2560, 00:14:10.642 "percent": 32 00:14:10.642 } 00:14:10.642 }, 00:14:10.642 "base_bdevs_list": [ 00:14:10.642 { 00:14:10.642 "name": "spare", 00:14:10.642 "uuid": "0268dbb9-1d97-5ebb-81e5-f33a8329c7a4", 00:14:10.642 "is_configured": true, 00:14:10.642 "data_offset": 256, 00:14:10.642 "data_size": 7936 00:14:10.642 }, 00:14:10.642 { 00:14:10.642 "name": "BaseBdev2", 00:14:10.642 "uuid": "2f1b6baf-483a-5617-a04d-578ff9886d03", 00:14:10.642 "is_configured": true, 00:14:10.642 "data_offset": 256, 00:14:10.642 "data_size": 7936 00:14:10.642 } 00:14:10.642 ] 00:14:10.642 }' 00:14:10.642 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.642 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:10.642 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.642 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:10.642 
14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:10.642 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.642 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:10.642 [2024-10-01 14:39:02.252799] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:10.901 [2024-10-01 14:39:02.336562] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:10.901 [2024-10-01 14:39:02.336637] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:10.901 [2024-10-01 14:39:02.336652] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:10.901 [2024-10-01 14:39:02.336662] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:10.901 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.901 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:10.901 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.901 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.901 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:10.901 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:10.901 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:10.901 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.901 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.901 14:39:02 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.901 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.901 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.901 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.901 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.901 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:10.901 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.901 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.901 "name": "raid_bdev1", 00:14:10.901 "uuid": "8b2d41f8-ba8d-4f77-a8f1-6ccec5fa192a", 00:14:10.901 "strip_size_kb": 0, 00:14:10.901 "state": "online", 00:14:10.901 "raid_level": "raid1", 00:14:10.901 "superblock": true, 00:14:10.901 "num_base_bdevs": 2, 00:14:10.901 "num_base_bdevs_discovered": 1, 00:14:10.901 "num_base_bdevs_operational": 1, 00:14:10.901 "base_bdevs_list": [ 00:14:10.901 { 00:14:10.901 "name": null, 00:14:10.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.901 "is_configured": false, 00:14:10.901 "data_offset": 0, 00:14:10.901 "data_size": 7936 00:14:10.901 }, 00:14:10.901 { 00:14:10.901 "name": "BaseBdev2", 00:14:10.901 "uuid": "2f1b6baf-483a-5617-a04d-578ff9886d03", 00:14:10.901 "is_configured": true, 00:14:10.901 "data_offset": 256, 00:14:10.901 "data_size": 7936 00:14:10.901 } 00:14:10.901 ] 00:14:10.901 }' 00:14:10.901 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.901 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:11.161 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:11.161 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:11.161 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:11.161 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:11.161 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:11.161 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.161 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.161 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:11.161 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.161 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.161 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:11.161 "name": "raid_bdev1", 00:14:11.161 "uuid": "8b2d41f8-ba8d-4f77-a8f1-6ccec5fa192a", 00:14:11.161 "strip_size_kb": 0, 00:14:11.161 "state": "online", 00:14:11.161 "raid_level": "raid1", 00:14:11.161 "superblock": true, 00:14:11.161 "num_base_bdevs": 2, 00:14:11.161 "num_base_bdevs_discovered": 1, 00:14:11.161 "num_base_bdevs_operational": 1, 00:14:11.161 "base_bdevs_list": [ 00:14:11.161 { 00:14:11.161 "name": null, 00:14:11.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.161 "is_configured": false, 00:14:11.161 "data_offset": 0, 00:14:11.161 "data_size": 7936 00:14:11.161 }, 00:14:11.161 { 00:14:11.161 "name": "BaseBdev2", 00:14:11.161 "uuid": "2f1b6baf-483a-5617-a04d-578ff9886d03", 00:14:11.161 "is_configured": true, 00:14:11.161 "data_offset": 256, 00:14:11.161 "data_size": 7936 
00:14:11.161 } 00:14:11.161 ] 00:14:11.161 }' 00:14:11.161 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:11.161 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:11.161 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:11.161 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:11.161 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:11.161 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.161 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:11.161 [2024-10-01 14:39:02.767195] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:11.161 [2024-10-01 14:39:02.777251] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:14:11.161 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.161 14:39:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:11.161 [2024-10-01 14:39:02.779137] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:12.102 14:39:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:12.102 14:39:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:12.102 14:39:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:12.102 14:39:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:12.102 14:39:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:14:12.360 14:39:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.360 14:39:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.360 14:39:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.360 14:39:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:12.360 14:39:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.360 14:39:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:12.360 "name": "raid_bdev1", 00:14:12.360 "uuid": "8b2d41f8-ba8d-4f77-a8f1-6ccec5fa192a", 00:14:12.360 "strip_size_kb": 0, 00:14:12.360 "state": "online", 00:14:12.360 "raid_level": "raid1", 00:14:12.360 "superblock": true, 00:14:12.360 "num_base_bdevs": 2, 00:14:12.360 "num_base_bdevs_discovered": 2, 00:14:12.360 "num_base_bdevs_operational": 2, 00:14:12.360 "process": { 00:14:12.360 "type": "rebuild", 00:14:12.360 "target": "spare", 00:14:12.360 "progress": { 00:14:12.360 "blocks": 2560, 00:14:12.360 "percent": 32 00:14:12.360 } 00:14:12.360 }, 00:14:12.360 "base_bdevs_list": [ 00:14:12.360 { 00:14:12.360 "name": "spare", 00:14:12.360 "uuid": "0268dbb9-1d97-5ebb-81e5-f33a8329c7a4", 00:14:12.360 "is_configured": true, 00:14:12.360 "data_offset": 256, 00:14:12.360 "data_size": 7936 00:14:12.360 }, 00:14:12.360 { 00:14:12.360 "name": "BaseBdev2", 00:14:12.360 "uuid": "2f1b6baf-483a-5617-a04d-578ff9886d03", 00:14:12.360 "is_configured": true, 00:14:12.360 "data_offset": 256, 00:14:12.360 "data_size": 7936 00:14:12.360 } 00:14:12.360 ] 00:14:12.360 }' 00:14:12.360 14:39:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:12.360 14:39:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:14:12.360 14:39:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:12.360 14:39:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:12.360 14:39:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:12.360 14:39:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:12.360 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:12.360 14:39:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:12.360 14:39:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:12.360 14:39:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:12.360 14:39:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=563 00:14:12.360 14:39:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:12.360 14:39:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:12.360 14:39:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:12.360 14:39:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:12.360 14:39:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:12.360 14:39:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:12.360 14:39:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.360 14:39:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.360 14:39:03 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.360 14:39:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:12.360 14:39:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.360 14:39:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:12.360 "name": "raid_bdev1", 00:14:12.360 "uuid": "8b2d41f8-ba8d-4f77-a8f1-6ccec5fa192a", 00:14:12.360 "strip_size_kb": 0, 00:14:12.360 "state": "online", 00:14:12.360 "raid_level": "raid1", 00:14:12.361 "superblock": true, 00:14:12.361 "num_base_bdevs": 2, 00:14:12.361 "num_base_bdevs_discovered": 2, 00:14:12.361 "num_base_bdevs_operational": 2, 00:14:12.361 "process": { 00:14:12.361 "type": "rebuild", 00:14:12.361 "target": "spare", 00:14:12.361 "progress": { 00:14:12.361 "blocks": 2816, 00:14:12.361 "percent": 35 00:14:12.361 } 00:14:12.361 }, 00:14:12.361 "base_bdevs_list": [ 00:14:12.361 { 00:14:12.361 "name": "spare", 00:14:12.361 "uuid": "0268dbb9-1d97-5ebb-81e5-f33a8329c7a4", 00:14:12.361 "is_configured": true, 00:14:12.361 "data_offset": 256, 00:14:12.361 "data_size": 7936 00:14:12.361 }, 00:14:12.361 { 00:14:12.361 "name": "BaseBdev2", 00:14:12.361 "uuid": "2f1b6baf-483a-5617-a04d-578ff9886d03", 00:14:12.361 "is_configured": true, 00:14:12.361 "data_offset": 256, 00:14:12.361 "data_size": 7936 00:14:12.361 } 00:14:12.361 ] 00:14:12.361 }' 00:14:12.361 14:39:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:12.361 14:39:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:12.361 14:39:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:12.361 14:39:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:12.361 14:39:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:14:13.300 14:39:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:13.300 14:39:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:13.300 14:39:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:13.300 14:39:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:13.300 14:39:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:13.300 14:39:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:13.300 14:39:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.300 14:39:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.300 14:39:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.300 14:39:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:13.560 14:39:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.560 14:39:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:13.560 "name": "raid_bdev1", 00:14:13.560 "uuid": "8b2d41f8-ba8d-4f77-a8f1-6ccec5fa192a", 00:14:13.560 "strip_size_kb": 0, 00:14:13.560 "state": "online", 00:14:13.560 "raid_level": "raid1", 00:14:13.560 "superblock": true, 00:14:13.560 "num_base_bdevs": 2, 00:14:13.560 "num_base_bdevs_discovered": 2, 00:14:13.560 "num_base_bdevs_operational": 2, 00:14:13.560 "process": { 00:14:13.560 "type": "rebuild", 00:14:13.560 "target": "spare", 00:14:13.560 "progress": { 00:14:13.560 "blocks": 5376, 00:14:13.560 "percent": 67 00:14:13.560 } 00:14:13.560 }, 00:14:13.560 "base_bdevs_list": [ 00:14:13.560 { 00:14:13.560 "name": "spare", 
00:14:13.560 "uuid": "0268dbb9-1d97-5ebb-81e5-f33a8329c7a4", 00:14:13.560 "is_configured": true, 00:14:13.560 "data_offset": 256, 00:14:13.560 "data_size": 7936 00:14:13.560 }, 00:14:13.560 { 00:14:13.560 "name": "BaseBdev2", 00:14:13.560 "uuid": "2f1b6baf-483a-5617-a04d-578ff9886d03", 00:14:13.560 "is_configured": true, 00:14:13.560 "data_offset": 256, 00:14:13.560 "data_size": 7936 00:14:13.560 } 00:14:13.560 ] 00:14:13.560 }' 00:14:13.560 14:39:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:13.560 14:39:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:13.560 14:39:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:13.560 14:39:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:13.560 14:39:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:14.498 [2024-10-01 14:39:05.894222] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:14.498 [2024-10-01 14:39:05.894311] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:14.498 [2024-10-01 14:39:05.894423] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:14.498 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:14.498 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:14.498 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:14.498 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:14.498 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:14.498 14:39:06 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:14.498 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.498 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.498 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:14.498 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.498 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.498 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:14.498 "name": "raid_bdev1", 00:14:14.498 "uuid": "8b2d41f8-ba8d-4f77-a8f1-6ccec5fa192a", 00:14:14.498 "strip_size_kb": 0, 00:14:14.498 "state": "online", 00:14:14.498 "raid_level": "raid1", 00:14:14.498 "superblock": true, 00:14:14.498 "num_base_bdevs": 2, 00:14:14.498 "num_base_bdevs_discovered": 2, 00:14:14.498 "num_base_bdevs_operational": 2, 00:14:14.498 "base_bdevs_list": [ 00:14:14.498 { 00:14:14.498 "name": "spare", 00:14:14.498 "uuid": "0268dbb9-1d97-5ebb-81e5-f33a8329c7a4", 00:14:14.498 "is_configured": true, 00:14:14.498 "data_offset": 256, 00:14:14.498 "data_size": 7936 00:14:14.498 }, 00:14:14.498 { 00:14:14.498 "name": "BaseBdev2", 00:14:14.498 "uuid": "2f1b6baf-483a-5617-a04d-578ff9886d03", 00:14:14.498 "is_configured": true, 00:14:14.498 "data_offset": 256, 00:14:14.498 "data_size": 7936 00:14:14.498 } 00:14:14.498 ] 00:14:14.498 }' 00:14:14.498 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:14.498 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:14.498 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:14.759 14:39:06 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:14.759 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:14:14.759 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:14.759 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:14.759 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:14.759 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:14.759 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:14.759 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.759 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.759 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:14.759 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.759 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.759 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:14.759 "name": "raid_bdev1", 00:14:14.759 "uuid": "8b2d41f8-ba8d-4f77-a8f1-6ccec5fa192a", 00:14:14.759 "strip_size_kb": 0, 00:14:14.759 "state": "online", 00:14:14.759 "raid_level": "raid1", 00:14:14.759 "superblock": true, 00:14:14.759 "num_base_bdevs": 2, 00:14:14.759 "num_base_bdevs_discovered": 2, 00:14:14.759 "num_base_bdevs_operational": 2, 00:14:14.759 "base_bdevs_list": [ 00:14:14.759 { 00:14:14.759 "name": "spare", 00:14:14.759 "uuid": "0268dbb9-1d97-5ebb-81e5-f33a8329c7a4", 00:14:14.759 "is_configured": true, 00:14:14.759 "data_offset": 256, 00:14:14.759 
"data_size": 7936 00:14:14.760 }, 00:14:14.760 { 00:14:14.760 "name": "BaseBdev2", 00:14:14.760 "uuid": "2f1b6baf-483a-5617-a04d-578ff9886d03", 00:14:14.760 "is_configured": true, 00:14:14.760 "data_offset": 256, 00:14:14.760 "data_size": 7936 00:14:14.760 } 00:14:14.760 ] 00:14:14.760 }' 00:14:14.760 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:14.760 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:14.760 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:14.760 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:14.760 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:14.760 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.760 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.760 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:14.760 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:14.760 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:14.760 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.760 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.760 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.760 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.760 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:14.760 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.760 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.760 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:14.760 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.760 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.760 "name": "raid_bdev1", 00:14:14.760 "uuid": "8b2d41f8-ba8d-4f77-a8f1-6ccec5fa192a", 00:14:14.760 "strip_size_kb": 0, 00:14:14.760 "state": "online", 00:14:14.760 "raid_level": "raid1", 00:14:14.760 "superblock": true, 00:14:14.760 "num_base_bdevs": 2, 00:14:14.760 "num_base_bdevs_discovered": 2, 00:14:14.760 "num_base_bdevs_operational": 2, 00:14:14.760 "base_bdevs_list": [ 00:14:14.760 { 00:14:14.760 "name": "spare", 00:14:14.760 "uuid": "0268dbb9-1d97-5ebb-81e5-f33a8329c7a4", 00:14:14.760 "is_configured": true, 00:14:14.760 "data_offset": 256, 00:14:14.760 "data_size": 7936 00:14:14.760 }, 00:14:14.760 { 00:14:14.760 "name": "BaseBdev2", 00:14:14.760 "uuid": "2f1b6baf-483a-5617-a04d-578ff9886d03", 00:14:14.760 "is_configured": true, 00:14:14.760 "data_offset": 256, 00:14:14.760 "data_size": 7936 00:14:14.760 } 00:14:14.760 ] 00:14:14.760 }' 00:14:14.760 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.760 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:15.020 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:15.020 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.020 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:15.020 [2024-10-01 14:39:06.588990] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:15.020 [2024-10-01 14:39:06.589019] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:15.020 [2024-10-01 14:39:06.589090] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:15.020 [2024-10-01 14:39:06.589156] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:15.020 [2024-10-01 14:39:06.589166] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:15.020 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.020 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.020 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.020 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:15.020 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:14:15.020 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.020 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:15.020 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:15.020 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:15.020 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:15.020 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:15.020 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:15.020 
14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:15.020 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:15.020 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:15.020 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:14:15.020 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:15.020 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:15.020 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:15.278 /dev/nbd0 00:14:15.278 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:15.278 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:15.278 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:15.278 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:14:15.278 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:15.278 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:15.278 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:15.278 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:14:15.278 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:15.278 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:15.278 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:15.278 1+0 records in 00:14:15.278 1+0 records out 00:14:15.278 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000608682 s, 6.7 MB/s 00:14:15.278 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:15.278 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:14:15.278 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:15.278 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:15.278 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:14:15.278 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:15.278 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:15.278 14:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:15.538 /dev/nbd1 00:14:15.538 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:15.538 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:15.538 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:15.538 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:14:15.538 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:15.538 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:15.538 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 
00:14:15.538 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:14:15.538 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:15.538 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:15.538 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:15.538 1+0 records in 00:14:15.538 1+0 records out 00:14:15.538 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000518632 s, 7.9 MB/s 00:14:15.538 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:15.538 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:14:15.538 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:15.538 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:15.538 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:14:15.538 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:15.538 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:15.538 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:15.798 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:15.798 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:15.798 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:15.798 14:39:07 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:15.798 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:14:15.798 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:15.798 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:15.798 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:15.798 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:15.798 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:15.798 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:15.798 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:15.798 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:15.798 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:14:15.798 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:14:15.798 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:15.798 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:16.058 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:16.058 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:16.058 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:16.058 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:16.058 
14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:16.058 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:16.058 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:14:16.058 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:14:16.058 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:16.058 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:16.058 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.058 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:16.058 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.058 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:16.058 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.058 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:16.058 [2024-10-01 14:39:07.704206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:16.058 [2024-10-01 14:39:07.704256] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:16.058 [2024-10-01 14:39:07.704278] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:16.058 [2024-10-01 14:39:07.704287] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:16.058 [2024-10-01 14:39:07.706522] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:16.058 [2024-10-01 14:39:07.706556] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: spare 00:14:16.058 [2024-10-01 14:39:07.706646] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:16.058 [2024-10-01 14:39:07.706694] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:16.058 [2024-10-01 14:39:07.706838] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:16.058 spare 00:14:16.058 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.058 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:16.058 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.058 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:16.318 [2024-10-01 14:39:07.806933] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:16.318 [2024-10-01 14:39:07.806986] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:16.318 [2024-10-01 14:39:07.807309] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:14:16.318 [2024-10-01 14:39:07.807491] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:16.318 [2024-10-01 14:39:07.807501] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:16.318 [2024-10-01 14:39:07.807675] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:16.318 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.318 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:16.318 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:16.318 
14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:16.318 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:16.318 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:16.318 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:16.318 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.318 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.318 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.318 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.318 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.318 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.318 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:16.318 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.318 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.318 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.318 "name": "raid_bdev1", 00:14:16.318 "uuid": "8b2d41f8-ba8d-4f77-a8f1-6ccec5fa192a", 00:14:16.318 "strip_size_kb": 0, 00:14:16.318 "state": "online", 00:14:16.318 "raid_level": "raid1", 00:14:16.318 "superblock": true, 00:14:16.319 "num_base_bdevs": 2, 00:14:16.319 "num_base_bdevs_discovered": 2, 00:14:16.319 "num_base_bdevs_operational": 2, 00:14:16.319 "base_bdevs_list": [ 00:14:16.319 { 00:14:16.319 "name": "spare", 00:14:16.319 "uuid": 
"0268dbb9-1d97-5ebb-81e5-f33a8329c7a4", 00:14:16.319 "is_configured": true, 00:14:16.319 "data_offset": 256, 00:14:16.319 "data_size": 7936 00:14:16.319 }, 00:14:16.319 { 00:14:16.319 "name": "BaseBdev2", 00:14:16.319 "uuid": "2f1b6baf-483a-5617-a04d-578ff9886d03", 00:14:16.319 "is_configured": true, 00:14:16.319 "data_offset": 256, 00:14:16.319 "data_size": 7936 00:14:16.319 } 00:14:16.319 ] 00:14:16.319 }' 00:14:16.319 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.319 14:39:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:16.579 14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:16.579 14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.579 14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:16.579 14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:16.579 14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.579 14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.579 14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.579 14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.579 14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:16.579 14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.579 14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.579 "name": "raid_bdev1", 00:14:16.579 "uuid": "8b2d41f8-ba8d-4f77-a8f1-6ccec5fa192a", 00:14:16.579 "strip_size_kb": 0, 00:14:16.579 
"state": "online", 00:14:16.579 "raid_level": "raid1", 00:14:16.579 "superblock": true, 00:14:16.579 "num_base_bdevs": 2, 00:14:16.579 "num_base_bdevs_discovered": 2, 00:14:16.579 "num_base_bdevs_operational": 2, 00:14:16.579 "base_bdevs_list": [ 00:14:16.579 { 00:14:16.579 "name": "spare", 00:14:16.579 "uuid": "0268dbb9-1d97-5ebb-81e5-f33a8329c7a4", 00:14:16.579 "is_configured": true, 00:14:16.579 "data_offset": 256, 00:14:16.579 "data_size": 7936 00:14:16.579 }, 00:14:16.579 { 00:14:16.579 "name": "BaseBdev2", 00:14:16.579 "uuid": "2f1b6baf-483a-5617-a04d-578ff9886d03", 00:14:16.579 "is_configured": true, 00:14:16.579 "data_offset": 256, 00:14:16.579 "data_size": 7936 00:14:16.579 } 00:14:16.579 ] 00:14:16.579 }' 00:14:16.579 14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.579 14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:16.579 14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.579 14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:16.580 14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.580 14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.580 14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:16.580 14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:16.580 14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.840 14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:16.840 14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:16.840 14:39:08 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.840 14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:16.840 [2024-10-01 14:39:08.276392] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:16.840 14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.841 14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:16.841 14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:16.841 14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:16.841 14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:16.841 14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:16.841 14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:16.841 14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.841 14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.841 14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.841 14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.841 14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.841 14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.841 14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.841 14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:16.841 
14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.841 14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.841 "name": "raid_bdev1", 00:14:16.841 "uuid": "8b2d41f8-ba8d-4f77-a8f1-6ccec5fa192a", 00:14:16.841 "strip_size_kb": 0, 00:14:16.841 "state": "online", 00:14:16.841 "raid_level": "raid1", 00:14:16.841 "superblock": true, 00:14:16.841 "num_base_bdevs": 2, 00:14:16.841 "num_base_bdevs_discovered": 1, 00:14:16.841 "num_base_bdevs_operational": 1, 00:14:16.841 "base_bdevs_list": [ 00:14:16.841 { 00:14:16.841 "name": null, 00:14:16.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.841 "is_configured": false, 00:14:16.841 "data_offset": 0, 00:14:16.841 "data_size": 7936 00:14:16.841 }, 00:14:16.841 { 00:14:16.841 "name": "BaseBdev2", 00:14:16.841 "uuid": "2f1b6baf-483a-5617-a04d-578ff9886d03", 00:14:16.841 "is_configured": true, 00:14:16.841 "data_offset": 256, 00:14:16.841 "data_size": 7936 00:14:16.841 } 00:14:16.841 ] 00:14:16.841 }' 00:14:16.841 14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.841 14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:17.101 14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:17.101 14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.101 14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:17.101 [2024-10-01 14:39:08.612490] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:17.101 [2024-10-01 14:39:08.612654] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:17.101 [2024-10-01 14:39:08.612671] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: 
Re-adding bdev spare to raid bdev raid_bdev1. 00:14:17.101 [2024-10-01 14:39:08.612716] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:17.101 [2024-10-01 14:39:08.622783] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:14:17.101 14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.101 14:39:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:17.101 [2024-10-01 14:39:08.624640] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:18.041 14:39:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:18.041 14:39:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.041 14:39:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:18.041 14:39:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:18.041 14:39:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.041 14:39:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.041 14:39:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.041 14:39:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.041 14:39:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:18.041 14:39:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.041 14:39:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.041 "name": "raid_bdev1", 00:14:18.041 "uuid": "8b2d41f8-ba8d-4f77-a8f1-6ccec5fa192a", 00:14:18.041 
"strip_size_kb": 0, 00:14:18.041 "state": "online", 00:14:18.041 "raid_level": "raid1", 00:14:18.041 "superblock": true, 00:14:18.041 "num_base_bdevs": 2, 00:14:18.041 "num_base_bdevs_discovered": 2, 00:14:18.041 "num_base_bdevs_operational": 2, 00:14:18.041 "process": { 00:14:18.041 "type": "rebuild", 00:14:18.041 "target": "spare", 00:14:18.041 "progress": { 00:14:18.041 "blocks": 2560, 00:14:18.041 "percent": 32 00:14:18.041 } 00:14:18.041 }, 00:14:18.041 "base_bdevs_list": [ 00:14:18.041 { 00:14:18.041 "name": "spare", 00:14:18.041 "uuid": "0268dbb9-1d97-5ebb-81e5-f33a8329c7a4", 00:14:18.041 "is_configured": true, 00:14:18.041 "data_offset": 256, 00:14:18.041 "data_size": 7936 00:14:18.041 }, 00:14:18.041 { 00:14:18.041 "name": "BaseBdev2", 00:14:18.041 "uuid": "2f1b6baf-483a-5617-a04d-578ff9886d03", 00:14:18.041 "is_configured": true, 00:14:18.041 "data_offset": 256, 00:14:18.041 "data_size": 7936 00:14:18.041 } 00:14:18.041 ] 00:14:18.041 }' 00:14:18.041 14:39:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.041 14:39:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:18.041 14:39:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.301 14:39:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:18.301 14:39:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:18.301 14:39:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.301 14:39:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:18.301 [2024-10-01 14:39:09.730724] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:18.301 [2024-10-01 14:39:09.830805] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev 
raid_bdev1: No such device 00:14:18.301 [2024-10-01 14:39:09.830887] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:18.301 [2024-10-01 14:39:09.830904] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:18.302 [2024-10-01 14:39:09.830914] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:18.302 14:39:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.302 14:39:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:18.302 14:39:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:18.302 14:39:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:18.302 14:39:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:18.302 14:39:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:18.302 14:39:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:18.302 14:39:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.302 14:39:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.302 14:39:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.302 14:39:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.302 14:39:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.302 14:39:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.302 14:39:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:14:18.302 14:39:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:18.302 14:39:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.302 14:39:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.302 "name": "raid_bdev1", 00:14:18.302 "uuid": "8b2d41f8-ba8d-4f77-a8f1-6ccec5fa192a", 00:14:18.302 "strip_size_kb": 0, 00:14:18.302 "state": "online", 00:14:18.302 "raid_level": "raid1", 00:14:18.302 "superblock": true, 00:14:18.302 "num_base_bdevs": 2, 00:14:18.302 "num_base_bdevs_discovered": 1, 00:14:18.302 "num_base_bdevs_operational": 1, 00:14:18.302 "base_bdevs_list": [ 00:14:18.302 { 00:14:18.302 "name": null, 00:14:18.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.302 "is_configured": false, 00:14:18.302 "data_offset": 0, 00:14:18.302 "data_size": 7936 00:14:18.302 }, 00:14:18.302 { 00:14:18.302 "name": "BaseBdev2", 00:14:18.302 "uuid": "2f1b6baf-483a-5617-a04d-578ff9886d03", 00:14:18.302 "is_configured": true, 00:14:18.302 "data_offset": 256, 00:14:18.302 "data_size": 7936 00:14:18.302 } 00:14:18.302 ] 00:14:18.302 }' 00:14:18.302 14:39:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.302 14:39:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:18.633 14:39:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:18.633 14:39:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.633 14:39:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:18.633 [2024-10-01 14:39:10.169343] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:18.633 [2024-10-01 14:39:10.169407] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:18.633 [2024-10-01 
14:39:10.169428] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:18.633 [2024-10-01 14:39:10.169439] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:18.633 [2024-10-01 14:39:10.169915] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:18.633 [2024-10-01 14:39:10.169941] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:18.633 [2024-10-01 14:39:10.170030] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:18.633 [2024-10-01 14:39:10.170065] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:18.633 [2024-10-01 14:39:10.170075] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:18.633 [2024-10-01 14:39:10.170103] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:18.633 [2024-10-01 14:39:10.179845] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:14:18.633 spare 00:14:18.633 14:39:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.633 14:39:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:18.633 [2024-10-01 14:39:10.181820] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:19.574 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:19.574 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.574 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:19.574 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:14:19.574 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.574 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.574 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.574 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:19.574 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.574 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.574 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:19.574 "name": "raid_bdev1", 00:14:19.574 "uuid": "8b2d41f8-ba8d-4f77-a8f1-6ccec5fa192a", 00:14:19.574 "strip_size_kb": 0, 00:14:19.574 "state": "online", 00:14:19.574 "raid_level": "raid1", 00:14:19.574 "superblock": true, 00:14:19.574 "num_base_bdevs": 2, 00:14:19.574 "num_base_bdevs_discovered": 2, 00:14:19.574 "num_base_bdevs_operational": 2, 00:14:19.574 "process": { 00:14:19.574 "type": "rebuild", 00:14:19.574 "target": "spare", 00:14:19.574 "progress": { 00:14:19.574 "blocks": 2560, 00:14:19.574 "percent": 32 00:14:19.574 } 00:14:19.574 }, 00:14:19.574 "base_bdevs_list": [ 00:14:19.574 { 00:14:19.574 "name": "spare", 00:14:19.574 "uuid": "0268dbb9-1d97-5ebb-81e5-f33a8329c7a4", 00:14:19.574 "is_configured": true, 00:14:19.574 "data_offset": 256, 00:14:19.574 "data_size": 7936 00:14:19.574 }, 00:14:19.574 { 00:14:19.574 "name": "BaseBdev2", 00:14:19.574 "uuid": "2f1b6baf-483a-5617-a04d-578ff9886d03", 00:14:19.574 "is_configured": true, 00:14:19.574 "data_offset": 256, 00:14:19.574 "data_size": 7936 00:14:19.574 } 00:14:19.574 ] 00:14:19.574 }' 00:14:19.574 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.574 14:39:11 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:19.574 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.835 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:19.835 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:19.835 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.835 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:19.835 [2024-10-01 14:39:11.287855] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:19.835 [2024-10-01 14:39:11.387955] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:19.835 [2024-10-01 14:39:11.388030] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:19.835 [2024-10-01 14:39:11.388048] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:19.835 [2024-10-01 14:39:11.388056] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:19.835 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.835 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:19.835 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:19.835 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:19.835 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:19.835 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:19.835 14:39:11 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:19.835 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.835 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.835 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.835 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.835 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.835 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.835 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.835 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:19.835 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.835 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.835 "name": "raid_bdev1", 00:14:19.835 "uuid": "8b2d41f8-ba8d-4f77-a8f1-6ccec5fa192a", 00:14:19.835 "strip_size_kb": 0, 00:14:19.835 "state": "online", 00:14:19.835 "raid_level": "raid1", 00:14:19.835 "superblock": true, 00:14:19.835 "num_base_bdevs": 2, 00:14:19.835 "num_base_bdevs_discovered": 1, 00:14:19.835 "num_base_bdevs_operational": 1, 00:14:19.835 "base_bdevs_list": [ 00:14:19.835 { 00:14:19.835 "name": null, 00:14:19.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.835 "is_configured": false, 00:14:19.835 "data_offset": 0, 00:14:19.835 "data_size": 7936 00:14:19.835 }, 00:14:19.835 { 00:14:19.835 "name": "BaseBdev2", 00:14:19.835 "uuid": "2f1b6baf-483a-5617-a04d-578ff9886d03", 00:14:19.835 "is_configured": true, 00:14:19.835 "data_offset": 256, 00:14:19.835 
"data_size": 7936 00:14:19.835 } 00:14:19.835 ] 00:14:19.835 }' 00:14:19.835 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.835 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:20.096 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:20.096 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.096 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:20.096 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:20.097 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.097 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.097 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.097 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.097 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:20.097 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.358 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.358 "name": "raid_bdev1", 00:14:20.358 "uuid": "8b2d41f8-ba8d-4f77-a8f1-6ccec5fa192a", 00:14:20.358 "strip_size_kb": 0, 00:14:20.358 "state": "online", 00:14:20.358 "raid_level": "raid1", 00:14:20.358 "superblock": true, 00:14:20.358 "num_base_bdevs": 2, 00:14:20.358 "num_base_bdevs_discovered": 1, 00:14:20.358 "num_base_bdevs_operational": 1, 00:14:20.358 "base_bdevs_list": [ 00:14:20.358 { 00:14:20.358 "name": null, 00:14:20.358 "uuid": "00000000-0000-0000-0000-000000000000", 
00:14:20.358 "is_configured": false, 00:14:20.358 "data_offset": 0, 00:14:20.358 "data_size": 7936 00:14:20.358 }, 00:14:20.358 { 00:14:20.358 "name": "BaseBdev2", 00:14:20.358 "uuid": "2f1b6baf-483a-5617-a04d-578ff9886d03", 00:14:20.358 "is_configured": true, 00:14:20.358 "data_offset": 256, 00:14:20.358 "data_size": 7936 00:14:20.358 } 00:14:20.358 ] 00:14:20.358 }' 00:14:20.358 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.358 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:20.358 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.358 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:20.358 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:20.358 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.358 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:20.358 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.358 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:20.358 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.358 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:20.358 [2024-10-01 14:39:11.870173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:20.358 [2024-10-01 14:39:11.870222] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.358 [2024-10-01 14:39:11.870241] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x61600000b180 00:14:20.358 [2024-10-01 14:39:11.870249] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.358 [2024-10-01 14:39:11.870666] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.358 [2024-10-01 14:39:11.870680] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:20.358 [2024-10-01 14:39:11.870774] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:20.358 [2024-10-01 14:39:11.870790] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:20.358 [2024-10-01 14:39:11.870799] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:20.358 [2024-10-01 14:39:11.870808] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:20.358 BaseBdev1 00:14:20.359 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.359 14:39:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:21.300 14:39:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:21.300 14:39:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:21.300 14:39:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:21.300 14:39:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:21.300 14:39:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:21.300 14:39:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:21.300 14:39:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:14:21.300 14:39:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.300 14:39:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.300 14:39:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.300 14:39:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.300 14:39:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.300 14:39:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.300 14:39:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:21.300 14:39:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.300 14:39:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.300 "name": "raid_bdev1", 00:14:21.300 "uuid": "8b2d41f8-ba8d-4f77-a8f1-6ccec5fa192a", 00:14:21.300 "strip_size_kb": 0, 00:14:21.300 "state": "online", 00:14:21.300 "raid_level": "raid1", 00:14:21.300 "superblock": true, 00:14:21.300 "num_base_bdevs": 2, 00:14:21.300 "num_base_bdevs_discovered": 1, 00:14:21.300 "num_base_bdevs_operational": 1, 00:14:21.301 "base_bdevs_list": [ 00:14:21.301 { 00:14:21.301 "name": null, 00:14:21.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.301 "is_configured": false, 00:14:21.301 "data_offset": 0, 00:14:21.301 "data_size": 7936 00:14:21.301 }, 00:14:21.301 { 00:14:21.301 "name": "BaseBdev2", 00:14:21.301 "uuid": "2f1b6baf-483a-5617-a04d-578ff9886d03", 00:14:21.301 "is_configured": true, 00:14:21.301 "data_offset": 256, 00:14:21.301 "data_size": 7936 00:14:21.301 } 00:14:21.301 ] 00:14:21.301 }' 00:14:21.301 14:39:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.301 14:39:12 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:21.561 14:39:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:21.561 14:39:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:21.561 14:39:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:21.561 14:39:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:21.562 14:39:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.562 14:39:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.562 14:39:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.562 14:39:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.562 14:39:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:21.562 14:39:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.562 14:39:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:21.562 "name": "raid_bdev1", 00:14:21.562 "uuid": "8b2d41f8-ba8d-4f77-a8f1-6ccec5fa192a", 00:14:21.562 "strip_size_kb": 0, 00:14:21.562 "state": "online", 00:14:21.562 "raid_level": "raid1", 00:14:21.562 "superblock": true, 00:14:21.562 "num_base_bdevs": 2, 00:14:21.562 "num_base_bdevs_discovered": 1, 00:14:21.562 "num_base_bdevs_operational": 1, 00:14:21.562 "base_bdevs_list": [ 00:14:21.562 { 00:14:21.562 "name": null, 00:14:21.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.562 "is_configured": false, 00:14:21.562 "data_offset": 0, 00:14:21.562 "data_size": 7936 00:14:21.562 }, 00:14:21.562 { 00:14:21.562 "name": "BaseBdev2", 00:14:21.562 "uuid": 
"2f1b6baf-483a-5617-a04d-578ff9886d03", 00:14:21.562 "is_configured": true, 00:14:21.562 "data_offset": 256, 00:14:21.562 "data_size": 7936 00:14:21.562 } 00:14:21.562 ] 00:14:21.562 }' 00:14:21.562 14:39:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:21.822 14:39:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:21.822 14:39:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:21.822 14:39:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:21.822 14:39:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:21.822 14:39:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # local es=0 00:14:21.822 14:39:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:21.822 14:39:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:21.822 14:39:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:21.822 14:39:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:21.822 14:39:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:21.822 14:39:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:21.822 14:39:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.822 14:39:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:21.822 [2024-10-01 14:39:13.310592] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:14:21.822 [2024-10-01 14:39:13.310863] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:21.822 [2024-10-01 14:39:13.310885] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:21.822 request: 00:14:21.822 { 00:14:21.822 "base_bdev": "BaseBdev1", 00:14:21.822 "raid_bdev": "raid_bdev1", 00:14:21.822 "method": "bdev_raid_add_base_bdev", 00:14:21.822 "req_id": 1 00:14:21.822 } 00:14:21.822 Got JSON-RPC error response 00:14:21.822 response: 00:14:21.822 { 00:14:21.822 "code": -22, 00:14:21.822 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:21.822 } 00:14:21.823 14:39:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:21.823 14:39:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:14:21.823 14:39:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:21.823 14:39:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:21.823 14:39:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:21.823 14:39:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:22.766 14:39:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:22.766 14:39:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:22.766 14:39:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:22.766 14:39:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:22.766 14:39:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:22.766 14:39:14 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:22.766 14:39:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.766 14:39:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.766 14:39:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.766 14:39:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.766 14:39:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.766 14:39:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.766 14:39:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:22.766 14:39:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.766 14:39:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.766 14:39:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.766 "name": "raid_bdev1", 00:14:22.766 "uuid": "8b2d41f8-ba8d-4f77-a8f1-6ccec5fa192a", 00:14:22.766 "strip_size_kb": 0, 00:14:22.766 "state": "online", 00:14:22.766 "raid_level": "raid1", 00:14:22.766 "superblock": true, 00:14:22.766 "num_base_bdevs": 2, 00:14:22.766 "num_base_bdevs_discovered": 1, 00:14:22.766 "num_base_bdevs_operational": 1, 00:14:22.766 "base_bdevs_list": [ 00:14:22.766 { 00:14:22.766 "name": null, 00:14:22.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.766 "is_configured": false, 00:14:22.766 "data_offset": 0, 00:14:22.766 "data_size": 7936 00:14:22.766 }, 00:14:22.766 { 00:14:22.766 "name": "BaseBdev2", 00:14:22.766 "uuid": "2f1b6baf-483a-5617-a04d-578ff9886d03", 00:14:22.766 "is_configured": true, 00:14:22.766 "data_offset": 256, 00:14:22.766 "data_size": 7936 00:14:22.766 } 
00:14:22.766 ] 00:14:22.766 }' 00:14:22.766 14:39:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.766 14:39:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:23.026 14:39:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:23.026 14:39:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.026 14:39:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:23.026 14:39:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:23.027 14:39:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.027 14:39:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.027 14:39:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.027 14:39:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.027 14:39:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:23.027 14:39:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.027 14:39:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.027 "name": "raid_bdev1", 00:14:23.027 "uuid": "8b2d41f8-ba8d-4f77-a8f1-6ccec5fa192a", 00:14:23.027 "strip_size_kb": 0, 00:14:23.027 "state": "online", 00:14:23.027 "raid_level": "raid1", 00:14:23.027 "superblock": true, 00:14:23.027 "num_base_bdevs": 2, 00:14:23.027 "num_base_bdevs_discovered": 1, 00:14:23.027 "num_base_bdevs_operational": 1, 00:14:23.027 "base_bdevs_list": [ 00:14:23.027 { 00:14:23.027 "name": null, 00:14:23.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.027 "is_configured": false, 
00:14:23.027 "data_offset": 0, 00:14:23.027 "data_size": 7936 00:14:23.027 }, 00:14:23.027 { 00:14:23.027 "name": "BaseBdev2", 00:14:23.027 "uuid": "2f1b6baf-483a-5617-a04d-578ff9886d03", 00:14:23.027 "is_configured": true, 00:14:23.027 "data_offset": 256, 00:14:23.027 "data_size": 7936 00:14:23.027 } 00:14:23.027 ] 00:14:23.027 }' 00:14:23.027 14:39:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.027 14:39:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:23.027 14:39:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:23.287 14:39:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:23.287 14:39:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 84301 00:14:23.287 14:39:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 84301 ']' 00:14:23.287 14:39:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 84301 00:14:23.287 14:39:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:14:23.287 14:39:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:23.287 14:39:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84301 00:14:23.287 killing process with pid 84301 00:14:23.287 Received shutdown signal, test time was about 60.000000 seconds 00:14:23.287 00:14:23.287 Latency(us) 00:14:23.287 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:23.287 =================================================================================================================== 00:14:23.287 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:23.287 14:39:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:14:23.287 14:39:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:23.287 14:39:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84301' 00:14:23.287 14:39:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 84301 00:14:23.287 14:39:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@974 -- # wait 84301 00:14:23.287 [2024-10-01 14:39:14.750982] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:23.287 [2024-10-01 14:39:14.751106] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:23.287 [2024-10-01 14:39:14.751154] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:23.287 [2024-10-01 14:39:14.751166] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:23.287 [2024-10-01 14:39:14.941651] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:24.232 14:39:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:14:24.232 00:14:24.232 real 0m17.581s 00:14:24.232 user 0m22.177s 00:14:24.232 sys 0m1.977s 00:14:24.232 ************************************ 00:14:24.232 END TEST raid_rebuild_test_sb_4k 00:14:24.232 ************************************ 00:14:24.232 14:39:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:24.232 14:39:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:24.232 14:39:15 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:14:24.232 14:39:15 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:14:24.232 14:39:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:24.232 
14:39:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:24.232 14:39:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:24.232 ************************************ 00:14:24.232 START TEST raid_state_function_test_sb_md_separate 00:14:24.232 ************************************ 00:14:24.232 14:39:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:14:24.232 14:39:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:14:24.232 14:39:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:14:24.232 14:39:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:24.232 14:39:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:24.232 14:39:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:24.232 14:39:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:24.232 14:39:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:24.232 14:39:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:24.232 14:39:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:24.232 14:39:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:24.232 14:39:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:24.232 14:39:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:24.232 14:39:15 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:24.232 14:39:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:24.232 14:39:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:24.232 14:39:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:24.232 14:39:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:24.232 14:39:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:24.232 14:39:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:14:24.232 14:39:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:14:24.232 14:39:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:24.232 14:39:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:24.232 14:39:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=84965 00:14:24.232 Process raid pid: 84965 00:14:24.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:24.232 14:39:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84965' 00:14:24.232 14:39:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 84965 00:14:24.232 14:39:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 84965 ']' 00:14:24.232 14:39:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:24.232 14:39:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:24.232 14:39:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:24.232 14:39:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:24.232 14:39:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:24.232 14:39:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:24.232 [2024-10-01 14:39:15.859649] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:14:24.232 [2024-10-01 14:39:15.859921] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:24.491 [2024-10-01 14:39:16.007297] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.752 [2024-10-01 14:39:16.194354] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.752 [2024-10-01 14:39:16.331424] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:24.752 [2024-10-01 14:39:16.331474] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:25.319 14:39:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:25.319 14:39:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:14:25.319 14:39:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:25.319 14:39:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.319 14:39:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:25.319 [2024-10-01 14:39:16.711361] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:25.319 [2024-10-01 14:39:16.711413] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:25.319 [2024-10-01 14:39:16.711424] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:25.319 [2024-10-01 14:39:16.711433] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:25.319 14:39:16 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.319 14:39:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:25.319 14:39:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:25.320 14:39:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:25.320 14:39:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:25.320 14:39:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:25.320 14:39:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:25.320 14:39:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.320 14:39:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.320 14:39:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.320 14:39:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.320 14:39:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.320 14:39:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.320 14:39:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.320 14:39:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:25.320 14:39:16 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.320 14:39:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.320 "name": "Existed_Raid", 00:14:25.320 "uuid": "29413964-8e2c-4d7a-bef9-521556cb9b0e", 00:14:25.320 "strip_size_kb": 0, 00:14:25.320 "state": "configuring", 00:14:25.320 "raid_level": "raid1", 00:14:25.320 "superblock": true, 00:14:25.320 "num_base_bdevs": 2, 00:14:25.320 "num_base_bdevs_discovered": 0, 00:14:25.320 "num_base_bdevs_operational": 2, 00:14:25.320 "base_bdevs_list": [ 00:14:25.320 { 00:14:25.320 "name": "BaseBdev1", 00:14:25.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.320 "is_configured": false, 00:14:25.320 "data_offset": 0, 00:14:25.320 "data_size": 0 00:14:25.320 }, 00:14:25.320 { 00:14:25.320 "name": "BaseBdev2", 00:14:25.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.320 "is_configured": false, 00:14:25.320 "data_offset": 0, 00:14:25.320 "data_size": 0 00:14:25.320 } 00:14:25.320 ] 00:14:25.320 }' 00:14:25.320 14:39:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.320 14:39:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:25.579 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:25.579 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.579 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:25.579 [2024-10-01 14:39:17.047375] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:25.579 [2024-10-01 14:39:17.047408] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:25.579 
14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.579 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:25.579 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.579 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:25.579 [2024-10-01 14:39:17.055394] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:25.579 [2024-10-01 14:39:17.055518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:25.579 [2024-10-01 14:39:17.055582] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:25.579 [2024-10-01 14:39:17.055615] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:25.579 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.579 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:14:25.579 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.579 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:25.579 [2024-10-01 14:39:17.104963] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:25.579 BaseBdev1 00:14:25.579 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.579 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:25.579 14:39:17 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:25.579 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:25.579 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:14:25.579 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:25.579 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:25.579 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:25.579 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.579 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:25.579 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.579 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:25.579 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.579 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:25.579 [ 00:14:25.579 { 00:14:25.579 "name": "BaseBdev1", 00:14:25.579 "aliases": [ 00:14:25.579 "1fc324f0-bb06-4421-a742-090ee96ffa26" 00:14:25.579 ], 00:14:25.579 "product_name": "Malloc disk", 00:14:25.579 "block_size": 4096, 00:14:25.579 "num_blocks": 8192, 00:14:25.579 "uuid": "1fc324f0-bb06-4421-a742-090ee96ffa26", 00:14:25.579 "md_size": 32, 00:14:25.579 "md_interleave": false, 00:14:25.579 "dif_type": 0, 00:14:25.579 "assigned_rate_limits": { 00:14:25.579 
"rw_ios_per_sec": 0, 00:14:25.579 "rw_mbytes_per_sec": 0, 00:14:25.579 "r_mbytes_per_sec": 0, 00:14:25.579 "w_mbytes_per_sec": 0 00:14:25.579 }, 00:14:25.579 "claimed": true, 00:14:25.579 "claim_type": "exclusive_write", 00:14:25.579 "zoned": false, 00:14:25.579 "supported_io_types": { 00:14:25.579 "read": true, 00:14:25.579 "write": true, 00:14:25.579 "unmap": true, 00:14:25.579 "flush": true, 00:14:25.579 "reset": true, 00:14:25.579 "nvme_admin": false, 00:14:25.579 "nvme_io": false, 00:14:25.579 "nvme_io_md": false, 00:14:25.579 "write_zeroes": true, 00:14:25.580 "zcopy": true, 00:14:25.580 "get_zone_info": false, 00:14:25.580 "zone_management": false, 00:14:25.580 "zone_append": false, 00:14:25.580 "compare": false, 00:14:25.580 "compare_and_write": false, 00:14:25.580 "abort": true, 00:14:25.580 "seek_hole": false, 00:14:25.580 "seek_data": false, 00:14:25.580 "copy": true, 00:14:25.580 "nvme_iov_md": false 00:14:25.580 }, 00:14:25.580 "memory_domains": [ 00:14:25.580 { 00:14:25.580 "dma_device_id": "system", 00:14:25.580 "dma_device_type": 1 00:14:25.580 }, 00:14:25.580 { 00:14:25.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:25.580 "dma_device_type": 2 00:14:25.580 } 00:14:25.580 ], 00:14:25.580 "driver_specific": {} 00:14:25.580 } 00:14:25.580 ] 00:14:25.580 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.580 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:14:25.580 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:25.580 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:25.580 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:25.580 14:39:17 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:25.580 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:25.580 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:25.580 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.580 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.580 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.580 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.580 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.580 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.580 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.580 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:25.580 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.580 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.580 "name": "Existed_Raid", 00:14:25.580 "uuid": "2eed33a4-677f-4641-b157-85b01fad747b", 00:14:25.580 "strip_size_kb": 0, 00:14:25.580 "state": "configuring", 00:14:25.580 "raid_level": "raid1", 00:14:25.580 "superblock": true, 00:14:25.580 "num_base_bdevs": 2, 00:14:25.580 "num_base_bdevs_discovered": 1, 00:14:25.580 "num_base_bdevs_operational": 2, 00:14:25.580 
"base_bdevs_list": [ 00:14:25.580 { 00:14:25.580 "name": "BaseBdev1", 00:14:25.580 "uuid": "1fc324f0-bb06-4421-a742-090ee96ffa26", 00:14:25.580 "is_configured": true, 00:14:25.580 "data_offset": 256, 00:14:25.580 "data_size": 7936 00:14:25.580 }, 00:14:25.580 { 00:14:25.580 "name": "BaseBdev2", 00:14:25.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.580 "is_configured": false, 00:14:25.580 "data_offset": 0, 00:14:25.580 "data_size": 0 00:14:25.580 } 00:14:25.580 ] 00:14:25.580 }' 00:14:25.580 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.580 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:25.845 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:25.845 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.845 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:25.845 [2024-10-01 14:39:17.441083] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:25.845 [2024-10-01 14:39:17.441222] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:25.845 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.845 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:25.845 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.845 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:25.845 [2024-10-01 14:39:17.449149] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:25.845 [2024-10-01 14:39:17.451035] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:25.845 [2024-10-01 14:39:17.451077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:25.845 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.845 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:25.845 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:25.845 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:25.845 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:25.845 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:25.845 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:25.845 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:25.845 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:25.845 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.845 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.845 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.845 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 
-- # local tmp 00:14:25.845 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.845 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.845 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.845 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:25.845 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.845 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.845 "name": "Existed_Raid", 00:14:25.845 "uuid": "6c8b668c-1c32-49eb-9d44-7b91ecaa0d96", 00:14:25.845 "strip_size_kb": 0, 00:14:25.845 "state": "configuring", 00:14:25.845 "raid_level": "raid1", 00:14:25.845 "superblock": true, 00:14:25.845 "num_base_bdevs": 2, 00:14:25.845 "num_base_bdevs_discovered": 1, 00:14:25.845 "num_base_bdevs_operational": 2, 00:14:25.845 "base_bdevs_list": [ 00:14:25.845 { 00:14:25.845 "name": "BaseBdev1", 00:14:25.846 "uuid": "1fc324f0-bb06-4421-a742-090ee96ffa26", 00:14:25.846 "is_configured": true, 00:14:25.846 "data_offset": 256, 00:14:25.846 "data_size": 7936 00:14:25.846 }, 00:14:25.846 { 00:14:25.846 "name": "BaseBdev2", 00:14:25.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.846 "is_configured": false, 00:14:25.846 "data_offset": 0, 00:14:25.846 "data_size": 0 00:14:25.846 } 00:14:25.846 ] 00:14:25.846 }' 00:14:25.846 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.846 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:26.145 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd 
bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:14:26.145 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.145 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:26.145 [2024-10-01 14:39:17.800607] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:26.145 [2024-10-01 14:39:17.800984] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:26.145 [2024-10-01 14:39:17.801004] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:26.145 [2024-10-01 14:39:17.801083] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:26.145 [2024-10-01 14:39:17.801185] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:26.145 [2024-10-01 14:39:17.801196] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:26.145 [2024-10-01 14:39:17.801281] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.145 BaseBdev2 00:14:26.145 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.145 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:26.145 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:26.145 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:26.145 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:14:26.145 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:26.145 14:39:17 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:26.145 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:26.145 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.145 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:26.145 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.145 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:26.146 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.146 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:26.146 [ 00:14:26.146 { 00:14:26.146 "name": "BaseBdev2", 00:14:26.146 "aliases": [ 00:14:26.146 "3e600690-6723-4ed3-a319-6aedbb49be03" 00:14:26.146 ], 00:14:26.146 "product_name": "Malloc disk", 00:14:26.146 "block_size": 4096, 00:14:26.146 "num_blocks": 8192, 00:14:26.146 "uuid": "3e600690-6723-4ed3-a319-6aedbb49be03", 00:14:26.146 "md_size": 32, 00:14:26.146 "md_interleave": false, 00:14:26.146 "dif_type": 0, 00:14:26.146 "assigned_rate_limits": { 00:14:26.146 "rw_ios_per_sec": 0, 00:14:26.146 "rw_mbytes_per_sec": 0, 00:14:26.146 "r_mbytes_per_sec": 0, 00:14:26.146 "w_mbytes_per_sec": 0 00:14:26.146 }, 00:14:26.146 "claimed": true, 00:14:26.146 "claim_type": "exclusive_write", 00:14:26.146 "zoned": false, 00:14:26.146 "supported_io_types": { 00:14:26.146 "read": true, 00:14:26.146 "write": true, 00:14:26.146 "unmap": true, 00:14:26.146 "flush": true, 00:14:26.146 "reset": true, 00:14:26.146 "nvme_admin": false, 00:14:26.146 "nvme_io": false, 00:14:26.146 "nvme_io_md": 
false, 00:14:26.146 "write_zeroes": true, 00:14:26.146 "zcopy": true, 00:14:26.146 "get_zone_info": false, 00:14:26.146 "zone_management": false, 00:14:26.146 "zone_append": false, 00:14:26.146 "compare": false, 00:14:26.146 "compare_and_write": false, 00:14:26.146 "abort": true, 00:14:26.146 "seek_hole": false, 00:14:26.146 "seek_data": false, 00:14:26.146 "copy": true, 00:14:26.146 "nvme_iov_md": false 00:14:26.146 }, 00:14:26.146 "memory_domains": [ 00:14:26.146 { 00:14:26.146 "dma_device_id": "system", 00:14:26.146 "dma_device_type": 1 00:14:26.146 }, 00:14:26.146 { 00:14:26.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.407 "dma_device_type": 2 00:14:26.407 } 00:14:26.407 ], 00:14:26.407 "driver_specific": {} 00:14:26.407 } 00:14:26.407 ] 00:14:26.407 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.407 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:14:26.407 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:26.407 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:26.407 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:26.407 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:26.407 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.407 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:26.407 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:26.407 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:26.407 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.407 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.407 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.407 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.407 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.407 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.407 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:26.407 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.407 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.407 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.407 "name": "Existed_Raid", 00:14:26.407 "uuid": "6c8b668c-1c32-49eb-9d44-7b91ecaa0d96", 00:14:26.407 "strip_size_kb": 0, 00:14:26.407 "state": "online", 00:14:26.407 "raid_level": "raid1", 00:14:26.407 "superblock": true, 00:14:26.407 "num_base_bdevs": 2, 00:14:26.407 "num_base_bdevs_discovered": 2, 00:14:26.407 "num_base_bdevs_operational": 2, 00:14:26.407 "base_bdevs_list": [ 00:14:26.407 { 00:14:26.407 "name": "BaseBdev1", 00:14:26.407 "uuid": "1fc324f0-bb06-4421-a742-090ee96ffa26", 00:14:26.407 "is_configured": true, 00:14:26.407 "data_offset": 256, 00:14:26.407 "data_size": 7936 00:14:26.407 }, 00:14:26.407 { 00:14:26.407 "name": "BaseBdev2", 00:14:26.407 
"uuid": "3e600690-6723-4ed3-a319-6aedbb49be03", 00:14:26.407 "is_configured": true, 00:14:26.407 "data_offset": 256, 00:14:26.407 "data_size": 7936 00:14:26.407 } 00:14:26.407 ] 00:14:26.407 }' 00:14:26.407 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.407 14:39:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:26.668 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:26.668 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:26.668 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:26.668 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:26.668 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:14:26.668 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:26.668 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:26.668 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:26.668 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.669 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:26.669 [2024-10-01 14:39:18.149080] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:26.669 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.669 14:39:18 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:26.669 "name": "Existed_Raid", 00:14:26.669 "aliases": [ 00:14:26.669 "6c8b668c-1c32-49eb-9d44-7b91ecaa0d96" 00:14:26.669 ], 00:14:26.669 "product_name": "Raid Volume", 00:14:26.669 "block_size": 4096, 00:14:26.669 "num_blocks": 7936, 00:14:26.669 "uuid": "6c8b668c-1c32-49eb-9d44-7b91ecaa0d96", 00:14:26.669 "md_size": 32, 00:14:26.669 "md_interleave": false, 00:14:26.669 "dif_type": 0, 00:14:26.669 "assigned_rate_limits": { 00:14:26.669 "rw_ios_per_sec": 0, 00:14:26.669 "rw_mbytes_per_sec": 0, 00:14:26.669 "r_mbytes_per_sec": 0, 00:14:26.669 "w_mbytes_per_sec": 0 00:14:26.669 }, 00:14:26.669 "claimed": false, 00:14:26.669 "zoned": false, 00:14:26.669 "supported_io_types": { 00:14:26.669 "read": true, 00:14:26.669 "write": true, 00:14:26.669 "unmap": false, 00:14:26.669 "flush": false, 00:14:26.669 "reset": true, 00:14:26.669 "nvme_admin": false, 00:14:26.669 "nvme_io": false, 00:14:26.669 "nvme_io_md": false, 00:14:26.669 "write_zeroes": true, 00:14:26.669 "zcopy": false, 00:14:26.669 "get_zone_info": false, 00:14:26.669 "zone_management": false, 00:14:26.669 "zone_append": false, 00:14:26.669 "compare": false, 00:14:26.669 "compare_and_write": false, 00:14:26.669 "abort": false, 00:14:26.669 "seek_hole": false, 00:14:26.669 "seek_data": false, 00:14:26.669 "copy": false, 00:14:26.669 "nvme_iov_md": false 00:14:26.669 }, 00:14:26.669 "memory_domains": [ 00:14:26.669 { 00:14:26.669 "dma_device_id": "system", 00:14:26.669 "dma_device_type": 1 00:14:26.669 }, 00:14:26.669 { 00:14:26.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.669 "dma_device_type": 2 00:14:26.669 }, 00:14:26.669 { 00:14:26.669 "dma_device_id": "system", 00:14:26.669 "dma_device_type": 1 00:14:26.669 }, 00:14:26.669 { 00:14:26.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.669 "dma_device_type": 2 00:14:26.669 } 00:14:26.669 ], 00:14:26.669 "driver_specific": { 00:14:26.669 "raid": 
{ 00:14:26.669 "uuid": "6c8b668c-1c32-49eb-9d44-7b91ecaa0d96", 00:14:26.669 "strip_size_kb": 0, 00:14:26.669 "state": "online", 00:14:26.669 "raid_level": "raid1", 00:14:26.669 "superblock": true, 00:14:26.669 "num_base_bdevs": 2, 00:14:26.669 "num_base_bdevs_discovered": 2, 00:14:26.669 "num_base_bdevs_operational": 2, 00:14:26.669 "base_bdevs_list": [ 00:14:26.669 { 00:14:26.669 "name": "BaseBdev1", 00:14:26.669 "uuid": "1fc324f0-bb06-4421-a742-090ee96ffa26", 00:14:26.669 "is_configured": true, 00:14:26.669 "data_offset": 256, 00:14:26.669 "data_size": 7936 00:14:26.669 }, 00:14:26.669 { 00:14:26.669 "name": "BaseBdev2", 00:14:26.669 "uuid": "3e600690-6723-4ed3-a319-6aedbb49be03", 00:14:26.669 "is_configured": true, 00:14:26.669 "data_offset": 256, 00:14:26.669 "data_size": 7936 00:14:26.669 } 00:14:26.669 ] 00:14:26.669 } 00:14:26.669 } 00:14:26.669 }' 00:14:26.669 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:26.669 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:26.669 BaseBdev2' 00:14:26.669 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.669 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:14:26.669 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:26.669 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:26.669 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.669 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:14:26.669 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.669 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.669 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:14:26.669 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:14:26.669 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:26.669 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.669 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:26.669 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.669 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:26.669 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.669 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:14:26.669 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:14:26.669 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:26.669 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:26.669 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:26.669 [2024-10-01 14:39:18.328906] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:26.930 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.930 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:26.931 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:14:26.931 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:26.931 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:14:26.931 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:26.931 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:14:26.931 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:26.931 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.931 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:26.931 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:26.931 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:26.931 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.931 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:14:26.931 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.931 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.931 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.931 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.931 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.931 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:26.931 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.931 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.931 "name": "Existed_Raid", 00:14:26.931 "uuid": "6c8b668c-1c32-49eb-9d44-7b91ecaa0d96", 00:14:26.931 "strip_size_kb": 0, 00:14:26.931 "state": "online", 00:14:26.931 "raid_level": "raid1", 00:14:26.931 "superblock": true, 00:14:26.931 "num_base_bdevs": 2, 00:14:26.931 "num_base_bdevs_discovered": 1, 00:14:26.931 "num_base_bdevs_operational": 1, 00:14:26.931 "base_bdevs_list": [ 00:14:26.931 { 00:14:26.931 "name": null, 00:14:26.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.931 "is_configured": false, 00:14:26.931 "data_offset": 0, 00:14:26.931 "data_size": 7936 00:14:26.931 }, 00:14:26.931 { 00:14:26.931 "name": "BaseBdev2", 00:14:26.931 "uuid": "3e600690-6723-4ed3-a319-6aedbb49be03", 00:14:26.931 "is_configured": true, 00:14:26.931 "data_offset": 256, 00:14:26.931 "data_size": 7936 00:14:26.931 } 00:14:26.931 ] 00:14:26.931 }' 00:14:26.931 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:26.931 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:27.192 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:27.192 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:27.192 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.192 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.192 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:27.192 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:27.192 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.192 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:27.192 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:27.192 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:27.192 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.192 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:27.192 [2024-10-01 14:39:18.782841] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:27.192 [2024-10-01 14:39:18.782941] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:27.192 [2024-10-01 14:39:18.847187] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:14:27.192 [2024-10-01 14:39:18.847405] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:27.192 [2024-10-01 14:39:18.847425] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:27.192 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.192 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:27.192 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:27.192 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.192 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:27.192 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.192 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:27.192 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.454 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:27.454 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:27.454 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:14:27.454 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 84965 00:14:27.454 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 84965 ']' 00:14:27.454 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@954 -- # kill -0 84965 00:14:27.454 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:14:27.454 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:27.454 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84965 00:14:27.454 killing process with pid 84965 00:14:27.454 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:27.454 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:27.454 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84965' 00:14:27.454 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 84965 00:14:27.454 [2024-10-01 14:39:18.901726] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:27.454 14:39:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 84965 00:14:27.454 [2024-10-01 14:39:18.912176] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:28.398 14:39:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:14:28.398 ************************************ 00:14:28.398 END TEST raid_state_function_test_sb_md_separate 00:14:28.398 ************************************ 00:14:28.398 00:14:28.398 real 0m3.951s 00:14:28.398 user 0m5.623s 00:14:28.398 sys 0m0.597s 00:14:28.398 14:39:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:28.398 14:39:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:28.398 14:39:19 bdev_raid -- 
bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:14:28.398 14:39:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:28.398 14:39:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:28.398 14:39:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:28.398 ************************************ 00:14:28.398 START TEST raid_superblock_test_md_separate 00:14:28.398 ************************************ 00:14:28.398 14:39:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:14:28.398 14:39:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:14:28.398 14:39:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:14:28.398 14:39:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:28.398 14:39:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:28.398 14:39:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:28.398 14:39:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:28.398 14:39:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:28.398 14:39:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:28.398 14:39:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:28.398 14:39:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:28.398 14:39:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:28.398 14:39:19 bdev_raid.raid_superblock_test_md_separate 
-- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:28.398 14:39:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:28.398 14:39:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:14:28.398 14:39:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:14:28.398 14:39:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=85206 00:14:28.398 14:39:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:28.398 14:39:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 85206 00:14:28.398 14:39:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 85206 ']' 00:14:28.398 14:39:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.398 14:39:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:28.398 14:39:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.398 14:39:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:28.398 14:39:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:28.398 [2024-10-01 14:39:19.884032] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:14:28.398 [2024-10-01 14:39:19.884328] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85206 ] 00:14:28.398 [2024-10-01 14:39:20.032445] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.659 [2024-10-01 14:39:20.232613] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.919 [2024-10-01 14:39:20.371539] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:28.919 [2024-10-01 14:39:20.371570] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:29.178 14:39:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:29.178 14:39:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0 00:14:29.178 14:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:29.178 14:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:29.178 14:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:29.178 14:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:29.178 14:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:29.178 14:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:29.178 14:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:29.178 14:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:29.178 14:39:20 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:14:29.178 14:39:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.178 14:39:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:29.178 malloc1 00:14:29.178 14:39:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.178 14:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:29.178 14:39:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.178 14:39:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:29.178 [2024-10-01 14:39:20.842070] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:29.178 [2024-10-01 14:39:20.842271] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.178 [2024-10-01 14:39:20.842319] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:29.178 [2024-10-01 14:39:20.842380] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.178 [2024-10-01 14:39:20.844342] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.178 [2024-10-01 14:39:20.844462] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:29.178 pt1 00:14:29.178 14:39:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.178 14:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:29.178 14:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:29.178 
14:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:29.178 14:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:29.178 14:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:29.178 14:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:29.178 14:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:29.178 14:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:29.178 14:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:14:29.178 14:39:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.178 14:39:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:29.439 malloc2 00:14:29.439 14:39:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.439 14:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:29.439 14:39:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.439 14:39:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:29.439 [2024-10-01 14:39:20.894548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:29.439 [2024-10-01 14:39:20.894602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.439 [2024-10-01 14:39:20.894625] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000007e80 00:14:29.439 [2024-10-01 14:39:20.894635] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.439 [2024-10-01 14:39:20.896556] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.439 [2024-10-01 14:39:20.896589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:29.439 pt2 00:14:29.439 14:39:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.439 14:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:29.439 14:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:29.439 14:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:14:29.439 14:39:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.439 14:39:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:29.439 [2024-10-01 14:39:20.906607] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:29.439 [2024-10-01 14:39:20.908539] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:29.439 [2024-10-01 14:39:20.908732] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:29.439 [2024-10-01 14:39:20.908745] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:29.439 [2024-10-01 14:39:20.908824] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:29.439 [2024-10-01 14:39:20.908940] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:29.439 [2024-10-01 14:39:20.908951] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:29.439 [2024-10-01 14:39:20.909048] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.439 14:39:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.439 14:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:29.439 14:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.439 14:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.439 14:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:29.439 14:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:29.439 14:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:29.439 14:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.439 14:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.439 14:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.439 14:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.439 14:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.439 14:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.439 14:39:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.439 14:39:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:14:29.439 14:39:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.440 14:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.440 "name": "raid_bdev1", 00:14:29.440 "uuid": "46f3d7bc-e83b-4939-91ba-f2dd5751340d", 00:14:29.440 "strip_size_kb": 0, 00:14:29.440 "state": "online", 00:14:29.440 "raid_level": "raid1", 00:14:29.440 "superblock": true, 00:14:29.440 "num_base_bdevs": 2, 00:14:29.440 "num_base_bdevs_discovered": 2, 00:14:29.440 "num_base_bdevs_operational": 2, 00:14:29.440 "base_bdevs_list": [ 00:14:29.440 { 00:14:29.440 "name": "pt1", 00:14:29.440 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:29.440 "is_configured": true, 00:14:29.440 "data_offset": 256, 00:14:29.440 "data_size": 7936 00:14:29.440 }, 00:14:29.440 { 00:14:29.440 "name": "pt2", 00:14:29.440 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:29.440 "is_configured": true, 00:14:29.440 "data_offset": 256, 00:14:29.440 "data_size": 7936 00:14:29.440 } 00:14:29.440 ] 00:14:29.440 }' 00:14:29.440 14:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.440 14:39:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:29.701 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:29.701 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:29.701 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:29.701 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:29.701 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:14:29.701 14:39:21 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:29.702 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:29.702 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:29.702 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.702 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:29.702 [2024-10-01 14:39:21.226968] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:29.702 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.702 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:29.702 "name": "raid_bdev1", 00:14:29.702 "aliases": [ 00:14:29.702 "46f3d7bc-e83b-4939-91ba-f2dd5751340d" 00:14:29.702 ], 00:14:29.702 "product_name": "Raid Volume", 00:14:29.702 "block_size": 4096, 00:14:29.702 "num_blocks": 7936, 00:14:29.702 "uuid": "46f3d7bc-e83b-4939-91ba-f2dd5751340d", 00:14:29.702 "md_size": 32, 00:14:29.702 "md_interleave": false, 00:14:29.702 "dif_type": 0, 00:14:29.702 "assigned_rate_limits": { 00:14:29.702 "rw_ios_per_sec": 0, 00:14:29.702 "rw_mbytes_per_sec": 0, 00:14:29.702 "r_mbytes_per_sec": 0, 00:14:29.702 "w_mbytes_per_sec": 0 00:14:29.702 }, 00:14:29.702 "claimed": false, 00:14:29.702 "zoned": false, 00:14:29.702 "supported_io_types": { 00:14:29.702 "read": true, 00:14:29.702 "write": true, 00:14:29.702 "unmap": false, 00:14:29.702 "flush": false, 00:14:29.702 "reset": true, 00:14:29.702 "nvme_admin": false, 00:14:29.702 "nvme_io": false, 00:14:29.702 "nvme_io_md": false, 00:14:29.702 "write_zeroes": true, 00:14:29.702 "zcopy": false, 00:14:29.702 "get_zone_info": false, 00:14:29.702 "zone_management": false, 00:14:29.702 "zone_append": false, 00:14:29.702 "compare": 
false, 00:14:29.702 "compare_and_write": false, 00:14:29.702 "abort": false, 00:14:29.702 "seek_hole": false, 00:14:29.702 "seek_data": false, 00:14:29.702 "copy": false, 00:14:29.702 "nvme_iov_md": false 00:14:29.702 }, 00:14:29.702 "memory_domains": [ 00:14:29.702 { 00:14:29.702 "dma_device_id": "system", 00:14:29.702 "dma_device_type": 1 00:14:29.702 }, 00:14:29.702 { 00:14:29.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.702 "dma_device_type": 2 00:14:29.702 }, 00:14:29.702 { 00:14:29.702 "dma_device_id": "system", 00:14:29.702 "dma_device_type": 1 00:14:29.702 }, 00:14:29.702 { 00:14:29.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.702 "dma_device_type": 2 00:14:29.702 } 00:14:29.702 ], 00:14:29.702 "driver_specific": { 00:14:29.702 "raid": { 00:14:29.702 "uuid": "46f3d7bc-e83b-4939-91ba-f2dd5751340d", 00:14:29.702 "strip_size_kb": 0, 00:14:29.702 "state": "online", 00:14:29.702 "raid_level": "raid1", 00:14:29.702 "superblock": true, 00:14:29.702 "num_base_bdevs": 2, 00:14:29.702 "num_base_bdevs_discovered": 2, 00:14:29.702 "num_base_bdevs_operational": 2, 00:14:29.702 "base_bdevs_list": [ 00:14:29.702 { 00:14:29.702 "name": "pt1", 00:14:29.702 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:29.702 "is_configured": true, 00:14:29.702 "data_offset": 256, 00:14:29.702 "data_size": 7936 00:14:29.702 }, 00:14:29.702 { 00:14:29.702 "name": "pt2", 00:14:29.702 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:29.702 "is_configured": true, 00:14:29.702 "data_offset": 256, 00:14:29.702 "data_size": 7936 00:14:29.702 } 00:14:29.702 ] 00:14:29.702 } 00:14:29.702 } 00:14:29.702 }' 00:14:29.702 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:29.702 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:29.702 pt2' 00:14:29.702 14:39:21 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.702 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:14:29.702 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:29.702 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:29.702 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.702 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.702 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:29.702 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.702 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:14:29.702 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:14:29.702 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:29.702 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:29.702 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.702 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.702 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:29.702 14:39:21 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.702 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:14:29.702 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:14:29.702 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:29.702 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.702 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:29.702 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:29.702 [2024-10-01 14:39:21.382960] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:29.969 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.969 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=46f3d7bc-e83b-4939-91ba-f2dd5751340d 00:14:29.969 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 46f3d7bc-e83b-4939-91ba-f2dd5751340d ']' 00:14:29.969 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:29.969 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.969 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:29.969 [2024-10-01 14:39:21.414681] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:29.969 [2024-10-01 14:39:21.414828] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:29.969 
[2024-10-01 14:39:21.414918] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:29.969 [2024-10-01 14:39:21.414980] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:29.969 [2024-10-01 14:39:21.414992] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:29.969 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.969 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:29.969 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.969 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.969 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:29.969 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.969 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:29.969 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:29.969 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:29.969 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:29.969 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.969 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:29.969 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.969 14:39:21 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:29.969 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:29.969 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.969 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:29.969 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.969 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:29.969 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:29.969 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.969 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:29.969 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.970 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:29.970 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:14:29.970 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:14:29.970 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:14:29.970 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:29.970 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:14:29.970 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:29.970 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:29.970 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:14:29.970 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.970 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:29.970 [2024-10-01 14:39:21.506736] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:29.970 [2024-10-01 14:39:21.508597] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:29.970 [2024-10-01 14:39:21.508671] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:29.970 [2024-10-01 14:39:21.508732] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:29.970 [2024-10-01 14:39:21.508748] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:29.970 [2024-10-01 14:39:21.508760] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:14:29.970 request: 00:14:29.970 { 00:14:29.970 "name": "raid_bdev1", 00:14:29.970 "raid_level": "raid1", 00:14:29.970 "base_bdevs": [ 00:14:29.970 "malloc1", 00:14:29.970 "malloc2" 00:14:29.970 ], 00:14:29.970 "superblock": false, 00:14:29.970 "method": "bdev_raid_create", 00:14:29.970 "req_id": 1 00:14:29.970 } 00:14:29.970 Got JSON-RPC error response 00:14:29.970 response: 00:14:29.970 { 00:14:29.970 "code": -17, 00:14:29.970 "message": "Failed to create RAID bdev raid_bdev1: 
File exists" 00:14:29.970 } 00:14:29.970 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:29.970 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:14:29.970 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:29.970 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:29.970 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:29.970 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.970 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.970 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:29.970 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:29.970 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.970 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:29.970 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:29.970 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:29.970 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.970 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:29.970 [2024-10-01 14:39:21.550752] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:29.970 [2024-10-01 14:39:21.550815] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.970 [2024-10-01 14:39:21.550833] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:29.970 [2024-10-01 14:39:21.550843] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.970 [2024-10-01 14:39:21.552828] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.970 [2024-10-01 14:39:21.552864] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:29.970 [2024-10-01 14:39:21.552916] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:29.970 [2024-10-01 14:39:21.552964] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:29.970 pt1 00:14:29.970 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.970 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:14:29.970 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.970 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:29.970 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:29.970 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:29.970 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:29.970 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.970 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.970 14:39:21 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.970 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.970 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.970 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.970 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.970 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:29.970 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.970 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.970 "name": "raid_bdev1", 00:14:29.970 "uuid": "46f3d7bc-e83b-4939-91ba-f2dd5751340d", 00:14:29.970 "strip_size_kb": 0, 00:14:29.970 "state": "configuring", 00:14:29.970 "raid_level": "raid1", 00:14:29.970 "superblock": true, 00:14:29.970 "num_base_bdevs": 2, 00:14:29.970 "num_base_bdevs_discovered": 1, 00:14:29.970 "num_base_bdevs_operational": 2, 00:14:29.970 "base_bdevs_list": [ 00:14:29.970 { 00:14:29.970 "name": "pt1", 00:14:29.970 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:29.970 "is_configured": true, 00:14:29.970 "data_offset": 256, 00:14:29.970 "data_size": 7936 00:14:29.970 }, 00:14:29.970 { 00:14:29.970 "name": null, 00:14:29.970 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:29.970 "is_configured": false, 00:14:29.970 "data_offset": 256, 00:14:29.970 "data_size": 7936 00:14:29.970 } 00:14:29.970 ] 00:14:29.970 }' 00:14:29.970 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.970 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:30.231 14:39:21 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:14:30.231 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:30.232 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:30.232 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:30.232 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.232 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:30.232 [2024-10-01 14:39:21.866807] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:30.232 [2024-10-01 14:39:21.866872] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.232 [2024-10-01 14:39:21.866892] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:30.232 [2024-10-01 14:39:21.866903] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.232 [2024-10-01 14:39:21.867108] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.232 [2024-10-01 14:39:21.867124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:30.232 [2024-10-01 14:39:21.867168] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:30.232 [2024-10-01 14:39:21.867189] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:30.232 [2024-10-01 14:39:21.867295] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:30.232 [2024-10-01 14:39:21.867306] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:30.232 [2024-10-01 14:39:21.867368] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:30.232 [2024-10-01 14:39:21.867467] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:30.232 [2024-10-01 14:39:21.867475] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:30.232 [2024-10-01 14:39:21.867562] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:30.232 pt2 00:14:30.232 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.232 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:30.232 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:30.232 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:30.232 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.232 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:30.232 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:30.232 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:30.232 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:30.232 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.232 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.232 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.232 14:39:21 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.232 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.232 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.232 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.232 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:30.232 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.232 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.232 "name": "raid_bdev1", 00:14:30.232 "uuid": "46f3d7bc-e83b-4939-91ba-f2dd5751340d", 00:14:30.232 "strip_size_kb": 0, 00:14:30.232 "state": "online", 00:14:30.232 "raid_level": "raid1", 00:14:30.232 "superblock": true, 00:14:30.232 "num_base_bdevs": 2, 00:14:30.232 "num_base_bdevs_discovered": 2, 00:14:30.232 "num_base_bdevs_operational": 2, 00:14:30.232 "base_bdevs_list": [ 00:14:30.232 { 00:14:30.232 "name": "pt1", 00:14:30.232 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:30.232 "is_configured": true, 00:14:30.232 "data_offset": 256, 00:14:30.232 "data_size": 7936 00:14:30.232 }, 00:14:30.232 { 00:14:30.232 "name": "pt2", 00:14:30.232 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:30.232 "is_configured": true, 00:14:30.232 "data_offset": 256, 00:14:30.232 "data_size": 7936 00:14:30.232 } 00:14:30.232 ] 00:14:30.232 }' 00:14:30.232 14:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.232 14:39:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:30.490 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # 
verify_raid_bdev_properties raid_bdev1 00:14:30.490 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:30.490 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:30.490 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:30.490 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:14:30.490 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:30.749 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:30.749 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:30.749 14:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.749 14:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:30.749 [2024-10-01 14:39:22.183150] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:30.749 14:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.749 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:30.749 "name": "raid_bdev1", 00:14:30.749 "aliases": [ 00:14:30.749 "46f3d7bc-e83b-4939-91ba-f2dd5751340d" 00:14:30.749 ], 00:14:30.749 "product_name": "Raid Volume", 00:14:30.749 "block_size": 4096, 00:14:30.749 "num_blocks": 7936, 00:14:30.749 "uuid": "46f3d7bc-e83b-4939-91ba-f2dd5751340d", 00:14:30.749 "md_size": 32, 00:14:30.749 "md_interleave": false, 00:14:30.749 "dif_type": 0, 00:14:30.749 "assigned_rate_limits": { 00:14:30.749 "rw_ios_per_sec": 0, 00:14:30.749 "rw_mbytes_per_sec": 0, 00:14:30.749 "r_mbytes_per_sec": 0, 00:14:30.749 
"w_mbytes_per_sec": 0 00:14:30.749 }, 00:14:30.749 "claimed": false, 00:14:30.749 "zoned": false, 00:14:30.749 "supported_io_types": { 00:14:30.749 "read": true, 00:14:30.749 "write": true, 00:14:30.749 "unmap": false, 00:14:30.749 "flush": false, 00:14:30.749 "reset": true, 00:14:30.749 "nvme_admin": false, 00:14:30.749 "nvme_io": false, 00:14:30.749 "nvme_io_md": false, 00:14:30.749 "write_zeroes": true, 00:14:30.750 "zcopy": false, 00:14:30.750 "get_zone_info": false, 00:14:30.750 "zone_management": false, 00:14:30.750 "zone_append": false, 00:14:30.750 "compare": false, 00:14:30.750 "compare_and_write": false, 00:14:30.750 "abort": false, 00:14:30.750 "seek_hole": false, 00:14:30.750 "seek_data": false, 00:14:30.750 "copy": false, 00:14:30.750 "nvme_iov_md": false 00:14:30.750 }, 00:14:30.750 "memory_domains": [ 00:14:30.750 { 00:14:30.750 "dma_device_id": "system", 00:14:30.750 "dma_device_type": 1 00:14:30.750 }, 00:14:30.750 { 00:14:30.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.750 "dma_device_type": 2 00:14:30.750 }, 00:14:30.750 { 00:14:30.750 "dma_device_id": "system", 00:14:30.750 "dma_device_type": 1 00:14:30.750 }, 00:14:30.750 { 00:14:30.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.750 "dma_device_type": 2 00:14:30.750 } 00:14:30.750 ], 00:14:30.750 "driver_specific": { 00:14:30.750 "raid": { 00:14:30.750 "uuid": "46f3d7bc-e83b-4939-91ba-f2dd5751340d", 00:14:30.750 "strip_size_kb": 0, 00:14:30.750 "state": "online", 00:14:30.750 "raid_level": "raid1", 00:14:30.750 "superblock": true, 00:14:30.750 "num_base_bdevs": 2, 00:14:30.750 "num_base_bdevs_discovered": 2, 00:14:30.750 "num_base_bdevs_operational": 2, 00:14:30.750 "base_bdevs_list": [ 00:14:30.750 { 00:14:30.750 "name": "pt1", 00:14:30.750 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:30.750 "is_configured": true, 00:14:30.750 "data_offset": 256, 00:14:30.750 "data_size": 7936 00:14:30.750 }, 00:14:30.750 { 00:14:30.750 "name": "pt2", 00:14:30.750 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:14:30.750 "is_configured": true, 00:14:30.750 "data_offset": 256, 00:14:30.750 "data_size": 7936 00:14:30.750 } 00:14:30.750 ] 00:14:30.750 } 00:14:30.750 } 00:14:30.750 }' 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:30.750 pt2' 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:30.750 [2024-10-01 14:39:22.347186] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 46f3d7bc-e83b-4939-91ba-f2dd5751340d '!=' 46f3d7bc-e83b-4939-91ba-f2dd5751340d ']' 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@198 -- # case $1 in 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:30.750 [2024-10-01 14:39:22.378966] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.750 14:39:22 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.750 "name": "raid_bdev1", 00:14:30.750 "uuid": "46f3d7bc-e83b-4939-91ba-f2dd5751340d", 00:14:30.750 "strip_size_kb": 0, 00:14:30.750 "state": "online", 00:14:30.750 "raid_level": "raid1", 00:14:30.750 "superblock": true, 00:14:30.750 "num_base_bdevs": 2, 00:14:30.750 "num_base_bdevs_discovered": 1, 00:14:30.750 "num_base_bdevs_operational": 1, 00:14:30.750 "base_bdevs_list": [ 00:14:30.750 { 00:14:30.750 "name": null, 00:14:30.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.750 "is_configured": false, 00:14:30.750 "data_offset": 0, 00:14:30.750 "data_size": 7936 00:14:30.750 }, 00:14:30.750 { 00:14:30.750 "name": "pt2", 00:14:30.750 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:30.750 "is_configured": true, 00:14:30.750 "data_offset": 256, 00:14:30.750 "data_size": 7936 00:14:30.750 } 00:14:30.750 ] 00:14:30.750 }' 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.750 14:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:31.320 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:31.320 14:39:22 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.320 14:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:31.320 [2024-10-01 14:39:22.707001] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:31.320 [2024-10-01 14:39:22.707025] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:31.320 [2024-10-01 14:39:22.707087] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:31.320 [2024-10-01 14:39:22.707130] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:31.320 [2024-10-01 14:39:22.707140] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:31.320 14:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.320 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.320 14:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.320 14:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:31.320 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:31.320 14:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.320 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:31.320 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:31.320 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:31.320 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:31.320 14:39:22 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:31.320 14:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.320 14:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:31.320 14:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.320 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:31.320 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:31.320 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:31.320 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:31.320 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:14:31.320 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:31.320 14:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.320 14:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:31.320 [2024-10-01 14:39:22.759026] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:31.320 [2024-10-01 14:39:22.759078] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:31.320 [2024-10-01 14:39:22.759093] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:31.320 [2024-10-01 14:39:22.759103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:31.320 [2024-10-01 14:39:22.761108] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:14:31.320 [2024-10-01 14:39:22.761253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:31.320 [2024-10-01 14:39:22.761311] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:31.320 [2024-10-01 14:39:22.761358] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:31.320 [2024-10-01 14:39:22.761448] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:31.320 [2024-10-01 14:39:22.761461] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:31.320 [2024-10-01 14:39:22.761536] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:31.320 [2024-10-01 14:39:22.761634] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:31.320 [2024-10-01 14:39:22.761642] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:31.320 [2024-10-01 14:39:22.761745] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:31.320 pt2 00:14:31.320 14:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.320 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:31.320 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.320 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.320 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:31.320 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:31.320 14:39:22 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:31.320 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.320 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.320 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.320 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.320 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.320 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.320 14:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.320 14:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:31.320 14:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.320 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.320 "name": "raid_bdev1", 00:14:31.320 "uuid": "46f3d7bc-e83b-4939-91ba-f2dd5751340d", 00:14:31.320 "strip_size_kb": 0, 00:14:31.320 "state": "online", 00:14:31.320 "raid_level": "raid1", 00:14:31.320 "superblock": true, 00:14:31.320 "num_base_bdevs": 2, 00:14:31.320 "num_base_bdevs_discovered": 1, 00:14:31.320 "num_base_bdevs_operational": 1, 00:14:31.320 "base_bdevs_list": [ 00:14:31.320 { 00:14:31.320 "name": null, 00:14:31.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.320 "is_configured": false, 00:14:31.320 "data_offset": 256, 00:14:31.320 "data_size": 7936 00:14:31.320 }, 00:14:31.320 { 00:14:31.320 "name": "pt2", 00:14:31.320 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:31.320 "is_configured": true, 
00:14:31.320 "data_offset": 256, 00:14:31.320 "data_size": 7936 00:14:31.320 } 00:14:31.320 ] 00:14:31.320 }' 00:14:31.320 14:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.320 14:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:31.581 14:39:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:31.581 14:39:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.581 14:39:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:31.581 [2024-10-01 14:39:23.075073] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:31.581 [2024-10-01 14:39:23.075098] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:31.581 [2024-10-01 14:39:23.075161] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:31.581 [2024-10-01 14:39:23.075209] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:31.581 [2024-10-01 14:39:23.075219] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:31.581 14:39:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.581 14:39:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:31.581 14:39:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.581 14:39:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.581 14:39:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:31.581 14:39:23 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.581 14:39:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:31.581 14:39:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:14:31.581 14:39:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:14:31.581 14:39:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:31.581 14:39:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.582 14:39:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:31.582 [2024-10-01 14:39:23.111105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:31.582 [2024-10-01 14:39:23.111157] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:31.582 [2024-10-01 14:39:23.111175] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:14:31.582 [2024-10-01 14:39:23.111184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:31.582 [2024-10-01 14:39:23.113169] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:31.582 [2024-10-01 14:39:23.113203] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:31.582 [2024-10-01 14:39:23.113256] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:31.582 [2024-10-01 14:39:23.113295] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:31.582 [2024-10-01 14:39:23.113408] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:31.582 
[2024-10-01 14:39:23.113418] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:31.582 [2024-10-01 14:39:23.113438] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:14:31.582 [2024-10-01 14:39:23.113490] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:31.582 [2024-10-01 14:39:23.113551] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:14:31.582 [2024-10-01 14:39:23.113560] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:31.582 [2024-10-01 14:39:23.113630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:31.582 [2024-10-01 14:39:23.113737] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:14:31.582 [2024-10-01 14:39:23.113748] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:14:31.582 [2024-10-01 14:39:23.113843] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:31.582 pt1 00:14:31.582 14:39:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.582 14:39:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:14:31.582 14:39:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:31.582 14:39:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.582 14:39:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.582 14:39:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:31.582 14:39:23 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:31.582 14:39:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:31.582 14:39:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.582 14:39:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.582 14:39:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.582 14:39:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.582 14:39:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.582 14:39:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.582 14:39:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.582 14:39:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:31.582 14:39:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.582 14:39:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.582 "name": "raid_bdev1", 00:14:31.582 "uuid": "46f3d7bc-e83b-4939-91ba-f2dd5751340d", 00:14:31.582 "strip_size_kb": 0, 00:14:31.582 "state": "online", 00:14:31.582 "raid_level": "raid1", 00:14:31.582 "superblock": true, 00:14:31.582 "num_base_bdevs": 2, 00:14:31.582 "num_base_bdevs_discovered": 1, 00:14:31.582 "num_base_bdevs_operational": 1, 00:14:31.582 "base_bdevs_list": [ 00:14:31.582 { 00:14:31.582 "name": null, 00:14:31.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.582 "is_configured": false, 00:14:31.582 "data_offset": 256, 00:14:31.582 "data_size": 7936 00:14:31.582 }, 00:14:31.582 { 00:14:31.582 
"name": "pt2", 00:14:31.582 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:31.582 "is_configured": true, 00:14:31.582 "data_offset": 256, 00:14:31.582 "data_size": 7936 00:14:31.582 } 00:14:31.582 ] 00:14:31.582 }' 00:14:31.582 14:39:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.582 14:39:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:31.840 14:39:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:31.840 14:39:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:31.840 14:39:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.840 14:39:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:31.840 14:39:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.840 14:39:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:31.840 14:39:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:31.840 14:39:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.840 14:39:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:31.840 14:39:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:31.840 [2024-10-01 14:39:23.479441] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:31.840 14:39:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.840 14:39:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 
46f3d7bc-e83b-4939-91ba-f2dd5751340d '!=' 46f3d7bc-e83b-4939-91ba-f2dd5751340d ']' 00:14:31.840 14:39:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 85206 00:14:31.840 14:39:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' -z 85206 ']' 00:14:31.840 14:39:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 85206 00:14:31.840 14:39:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname 00:14:31.840 14:39:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:31.840 14:39:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85206 00:14:32.100 killing process with pid 85206 00:14:32.100 14:39:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:32.100 14:39:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:32.100 14:39:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85206' 00:14:32.100 14:39:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@969 -- # kill 85206 00:14:32.100 14:39:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # wait 85206 00:14:32.100 [2024-10-01 14:39:23.529061] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:32.100 [2024-10-01 14:39:23.529138] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:32.100 [2024-10-01 14:39:23.529182] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:32.100 [2024-10-01 14:39:23.529195] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 
00:14:32.100 [2024-10-01 14:39:23.665002] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:33.040 14:39:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:14:33.040 00:14:33.040 real 0m4.653s 00:14:33.040 user 0m6.944s 00:14:33.040 sys 0m0.766s 00:14:33.040 14:39:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:33.040 ************************************ 00:14:33.040 END TEST raid_superblock_test_md_separate 00:14:33.040 ************************************ 00:14:33.040 14:39:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:33.040 14:39:24 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:14:33.040 14:39:24 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:14:33.040 14:39:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:33.040 14:39:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:33.040 14:39:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:33.040 ************************************ 00:14:33.040 START TEST raid_rebuild_test_sb_md_separate 00:14:33.040 ************************************ 00:14:33.040 14:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:14:33.040 14:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:33.040 14:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:33.040 14:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:33.040 14:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:33.040 14:39:24 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:33.040 14:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:33.040 14:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:33.040 14:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:33.040 14:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:33.040 14:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:33.040 14:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:33.040 14:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:33.040 14:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:33.040 14:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:33.040 14:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:33.040 14:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:33.040 14:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:33.040 14:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:33.040 14:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:33.040 14:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:33.040 14:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:33.040 14:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 
00:14:33.040 14:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:33.040 14:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:33.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.040 14:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=85512 00:14:33.040 14:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 85512 00:14:33.040 14:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 85512 ']' 00:14:33.040 14:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.040 14:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:33.040 14:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.040 14:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:33.040 14:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:33.040 14:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:33.040 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:33.040 Zero copy mechanism will not be used. 00:14:33.040 [2024-10-01 14:39:24.606618] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:14:33.040 [2024-10-01 14:39:24.606758] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85512 ] 00:14:33.301 [2024-10-01 14:39:24.758159] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.301 [2024-10-01 14:39:24.947039] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.561 [2024-10-01 14:39:25.082768] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:33.561 [2024-10-01 14:39:25.082818] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:33.822 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:33.822 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:14:33.822 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:33.822 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:14:33.822 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.822 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:33.822 BaseBdev1_malloc 00:14:33.822 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.822 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:33.822 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.822 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:14:33.822 [2024-10-01 14:39:25.496000] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:33.822 [2024-10-01 14:39:25.496054] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.822 [2024-10-01 14:39:25.496079] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:33.822 [2024-10-01 14:39:25.496090] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.823 [2024-10-01 14:39:25.497999] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.823 [2024-10-01 14:39:25.498034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:33.823 BaseBdev1 00:14:33.823 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.823 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:33.823 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:14:33.823 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.823 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:34.084 BaseBdev2_malloc 00:14:34.084 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.084 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:34.084 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.084 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:34.084 [2024-10-01 14:39:25.547932] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:34.084 [2024-10-01 14:39:25.547988] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.084 [2024-10-01 14:39:25.548006] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:34.084 [2024-10-01 14:39:25.548017] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.084 [2024-10-01 14:39:25.549905] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.084 [2024-10-01 14:39:25.549938] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:34.084 BaseBdev2 00:14:34.084 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.084 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:14:34.084 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.084 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:34.084 spare_malloc 00:14:34.084 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.084 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:34.084 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.084 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:34.084 spare_delay 00:14:34.084 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.084 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- 
# rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:34.084 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.084 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:34.084 [2024-10-01 14:39:25.596181] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:34.084 [2024-10-01 14:39:25.596231] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.084 [2024-10-01 14:39:25.596250] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:34.084 [2024-10-01 14:39:25.596261] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.084 [2024-10-01 14:39:25.598174] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.084 [2024-10-01 14:39:25.598208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:34.084 spare 00:14:34.084 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.084 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:34.084 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.084 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:34.084 [2024-10-01 14:39:25.604232] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:34.084 [2024-10-01 14:39:25.606047] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:34.084 [2024-10-01 14:39:25.606225] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:34.084 [2024-10-01 14:39:25.606239] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:34.084 [2024-10-01 14:39:25.606314] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:34.084 [2024-10-01 14:39:25.606430] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:34.084 [2024-10-01 14:39:25.606438] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:34.084 [2024-10-01 14:39:25.606539] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:34.084 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.084 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:34.084 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:34.084 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:34.084 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:34.084 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:34.084 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:34.084 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.084 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.084 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.084 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.084 14:39:25 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.084 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.084 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.084 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:34.084 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.084 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.084 "name": "raid_bdev1", 00:14:34.084 "uuid": "85e21040-6a08-481f-9134-b392808234b0", 00:14:34.084 "strip_size_kb": 0, 00:14:34.084 "state": "online", 00:14:34.084 "raid_level": "raid1", 00:14:34.084 "superblock": true, 00:14:34.084 "num_base_bdevs": 2, 00:14:34.084 "num_base_bdevs_discovered": 2, 00:14:34.084 "num_base_bdevs_operational": 2, 00:14:34.084 "base_bdevs_list": [ 00:14:34.084 { 00:14:34.084 "name": "BaseBdev1", 00:14:34.084 "uuid": "37af9335-428f-5702-85cb-caeb0c3e39be", 00:14:34.084 "is_configured": true, 00:14:34.084 "data_offset": 256, 00:14:34.084 "data_size": 7936 00:14:34.084 }, 00:14:34.084 { 00:14:34.084 "name": "BaseBdev2", 00:14:34.084 "uuid": "276e9e63-4302-57c2-bdee-5fe2d4b31ef4", 00:14:34.085 "is_configured": true, 00:14:34.085 "data_offset": 256, 00:14:34.085 "data_size": 7936 00:14:34.085 } 00:14:34.085 ] 00:14:34.085 }' 00:14:34.085 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.085 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:34.343 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:34.343 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.343 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:34.343 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:34.343 [2024-10-01 14:39:25.936590] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:34.343 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.343 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:14:34.343 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.344 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.344 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:34.344 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:34.344 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.344 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:14:34.344 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:34.344 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:34.344 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:34.344 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:34.344 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:34.344 14:39:25 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:34.344 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:34.344 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:34.344 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:34.344 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:14:34.344 14:39:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:34.344 14:39:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:34.344 14:39:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:34.602 [2024-10-01 14:39:26.192429] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:34.602 /dev/nbd0 00:14:34.602 14:39:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:34.602 14:39:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:34.602 14:39:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:34.602 14:39:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:14:34.602 14:39:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:34.602 14:39:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:34.602 14:39:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:34.602 14:39:26 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@873 -- # break 00:14:34.602 14:39:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:34.602 14:39:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:34.602 14:39:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:34.602 1+0 records in 00:14:34.602 1+0 records out 00:14:34.602 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000552324 s, 7.4 MB/s 00:14:34.602 14:39:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:34.602 14:39:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:14:34.602 14:39:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:34.602 14:39:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:34.602 14:39:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:14:34.602 14:39:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:34.602 14:39:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:34.602 14:39:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:34.602 14:39:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:34.602 14:39:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:14:35.985 7936+0 records in 00:14:35.985 7936+0 records out 00:14:35.985 32505856 bytes (33 MB, 31 MiB) copied, 1.07462 s, 30.2 
MB/s 00:14:35.985 14:39:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:35.985 14:39:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:35.985 14:39:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:35.985 14:39:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:35.985 14:39:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:14:35.985 14:39:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:35.985 14:39:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:35.985 14:39:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:35.985 14:39:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:35.985 14:39:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:35.985 14:39:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:35.985 14:39:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:35.985 14:39:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:35.985 [2024-10-01 14:39:27.539465] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:35.985 14:39:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:14:35.985 14:39:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:14:35.985 14:39:27 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:35.985 14:39:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.985 14:39:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:35.985 [2024-10-01 14:39:27.547541] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:35.985 14:39:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.985 14:39:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:35.985 14:39:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:35.985 14:39:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:35.985 14:39:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:35.985 14:39:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:35.985 14:39:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:35.985 14:39:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.985 14:39:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.985 14:39:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.985 14:39:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.985 14:39:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.985 14:39:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:35.985 14:39:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.985 14:39:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:35.985 14:39:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.985 14:39:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.985 "name": "raid_bdev1", 00:14:35.985 "uuid": "85e21040-6a08-481f-9134-b392808234b0", 00:14:35.985 "strip_size_kb": 0, 00:14:35.985 "state": "online", 00:14:35.985 "raid_level": "raid1", 00:14:35.985 "superblock": true, 00:14:35.985 "num_base_bdevs": 2, 00:14:35.985 "num_base_bdevs_discovered": 1, 00:14:35.985 "num_base_bdevs_operational": 1, 00:14:35.985 "base_bdevs_list": [ 00:14:35.985 { 00:14:35.985 "name": null, 00:14:35.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.985 "is_configured": false, 00:14:35.985 "data_offset": 0, 00:14:35.985 "data_size": 7936 00:14:35.985 }, 00:14:35.985 { 00:14:35.985 "name": "BaseBdev2", 00:14:35.985 "uuid": "276e9e63-4302-57c2-bdee-5fe2d4b31ef4", 00:14:35.985 "is_configured": true, 00:14:35.985 "data_offset": 256, 00:14:35.985 "data_size": 7936 00:14:35.985 } 00:14:35.985 ] 00:14:35.985 }' 00:14:35.985 14:39:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.985 14:39:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:36.246 14:39:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:36.246 14:39:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.246 14:39:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:36.246 [2024-10-01 14:39:27.875645] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:36.246 [2024-10-01 14:39:27.885564] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:14:36.246 14:39:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.246 14:39:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:36.246 [2024-10-01 14:39:27.887424] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:37.630 14:39:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:37.630 14:39:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.630 14:39:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:37.630 14:39:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:37.630 14:39:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.630 14:39:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.630 14:39:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.630 14:39:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.630 14:39:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:37.630 14:39:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.630 14:39:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.630 "name": "raid_bdev1", 00:14:37.630 "uuid": 
"85e21040-6a08-481f-9134-b392808234b0", 00:14:37.631 "strip_size_kb": 0, 00:14:37.631 "state": "online", 00:14:37.631 "raid_level": "raid1", 00:14:37.631 "superblock": true, 00:14:37.631 "num_base_bdevs": 2, 00:14:37.631 "num_base_bdevs_discovered": 2, 00:14:37.631 "num_base_bdevs_operational": 2, 00:14:37.631 "process": { 00:14:37.631 "type": "rebuild", 00:14:37.631 "target": "spare", 00:14:37.631 "progress": { 00:14:37.631 "blocks": 2560, 00:14:37.631 "percent": 32 00:14:37.631 } 00:14:37.631 }, 00:14:37.631 "base_bdevs_list": [ 00:14:37.631 { 00:14:37.631 "name": "spare", 00:14:37.631 "uuid": "cac7fa46-f797-582c-ab30-268f05cf74b8", 00:14:37.631 "is_configured": true, 00:14:37.631 "data_offset": 256, 00:14:37.631 "data_size": 7936 00:14:37.631 }, 00:14:37.631 { 00:14:37.631 "name": "BaseBdev2", 00:14:37.631 "uuid": "276e9e63-4302-57c2-bdee-5fe2d4b31ef4", 00:14:37.631 "is_configured": true, 00:14:37.631 "data_offset": 256, 00:14:37.631 "data_size": 7936 00:14:37.631 } 00:14:37.631 ] 00:14:37.631 }' 00:14:37.631 14:39:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.631 14:39:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:37.631 14:39:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.631 14:39:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:37.631 14:39:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:37.631 14:39:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.631 14:39:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:37.631 [2024-10-01 14:39:28.993520] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:37.631 
[2024-10-01 14:39:29.093501] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:37.631 [2024-10-01 14:39:29.093594] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.631 [2024-10-01 14:39:29.093610] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:37.631 [2024-10-01 14:39:29.093626] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:37.631 14:39:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.631 14:39:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:37.631 14:39:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.631 14:39:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.631 14:39:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:37.631 14:39:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:37.631 14:39:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:37.631 14:39:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.631 14:39:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.631 14:39:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.631 14:39:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.631 14:39:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.631 14:39:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.631 14:39:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.631 14:39:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:37.631 14:39:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.631 14:39:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.631 "name": "raid_bdev1", 00:14:37.631 "uuid": "85e21040-6a08-481f-9134-b392808234b0", 00:14:37.631 "strip_size_kb": 0, 00:14:37.631 "state": "online", 00:14:37.631 "raid_level": "raid1", 00:14:37.631 "superblock": true, 00:14:37.631 "num_base_bdevs": 2, 00:14:37.631 "num_base_bdevs_discovered": 1, 00:14:37.631 "num_base_bdevs_operational": 1, 00:14:37.631 "base_bdevs_list": [ 00:14:37.631 { 00:14:37.631 "name": null, 00:14:37.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.631 "is_configured": false, 00:14:37.631 "data_offset": 0, 00:14:37.631 "data_size": 7936 00:14:37.631 }, 00:14:37.631 { 00:14:37.631 "name": "BaseBdev2", 00:14:37.631 "uuid": "276e9e63-4302-57c2-bdee-5fe2d4b31ef4", 00:14:37.631 "is_configured": true, 00:14:37.631 "data_offset": 256, 00:14:37.631 "data_size": 7936 00:14:37.631 } 00:14:37.631 ] 00:14:37.631 }' 00:14:37.631 14:39:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.631 14:39:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:37.890 14:39:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:37.890 14:39:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.890 14:39:29 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:37.890 14:39:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:37.890 14:39:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.890 14:39:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.890 14:39:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.890 14:39:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.890 14:39:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:37.890 14:39:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.890 14:39:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.890 "name": "raid_bdev1", 00:14:37.890 "uuid": "85e21040-6a08-481f-9134-b392808234b0", 00:14:37.890 "strip_size_kb": 0, 00:14:37.890 "state": "online", 00:14:37.890 "raid_level": "raid1", 00:14:37.890 "superblock": true, 00:14:37.890 "num_base_bdevs": 2, 00:14:37.890 "num_base_bdevs_discovered": 1, 00:14:37.890 "num_base_bdevs_operational": 1, 00:14:37.890 "base_bdevs_list": [ 00:14:37.890 { 00:14:37.890 "name": null, 00:14:37.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.890 "is_configured": false, 00:14:37.890 "data_offset": 0, 00:14:37.890 "data_size": 7936 00:14:37.890 }, 00:14:37.890 { 00:14:37.890 "name": "BaseBdev2", 00:14:37.890 "uuid": "276e9e63-4302-57c2-bdee-5fe2d4b31ef4", 00:14:37.890 "is_configured": true, 00:14:37.890 "data_offset": 256, 00:14:37.890 "data_size": 7936 00:14:37.890 } 00:14:37.890 ] 00:14:37.890 }' 00:14:37.890 14:39:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:14:37.890 14:39:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:37.891 14:39:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.891 14:39:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:37.891 14:39:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:37.891 14:39:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.891 14:39:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:37.891 [2024-10-01 14:39:29.503754] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:37.891 [2024-10-01 14:39:29.512747] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:14:37.891 14:39:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.891 14:39:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:37.891 [2024-10-01 14:39:29.514582] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:39.275 14:39:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:39.275 14:39:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.275 14:39:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:39.275 14:39:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:39.275 14:39:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.275 14:39:30 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.275 14:39:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.275 14:39:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:39.275 14:39:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.275 14:39:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.275 14:39:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.275 "name": "raid_bdev1", 00:14:39.275 "uuid": "85e21040-6a08-481f-9134-b392808234b0", 00:14:39.275 "strip_size_kb": 0, 00:14:39.275 "state": "online", 00:14:39.275 "raid_level": "raid1", 00:14:39.275 "superblock": true, 00:14:39.275 "num_base_bdevs": 2, 00:14:39.275 "num_base_bdevs_discovered": 2, 00:14:39.275 "num_base_bdevs_operational": 2, 00:14:39.275 "process": { 00:14:39.275 "type": "rebuild", 00:14:39.275 "target": "spare", 00:14:39.275 "progress": { 00:14:39.275 "blocks": 2560, 00:14:39.275 "percent": 32 00:14:39.275 } 00:14:39.275 }, 00:14:39.275 "base_bdevs_list": [ 00:14:39.275 { 00:14:39.275 "name": "spare", 00:14:39.275 "uuid": "cac7fa46-f797-582c-ab30-268f05cf74b8", 00:14:39.275 "is_configured": true, 00:14:39.275 "data_offset": 256, 00:14:39.275 "data_size": 7936 00:14:39.275 }, 00:14:39.275 { 00:14:39.275 "name": "BaseBdev2", 00:14:39.275 "uuid": "276e9e63-4302-57c2-bdee-5fe2d4b31ef4", 00:14:39.275 "is_configured": true, 00:14:39.275 "data_offset": 256, 00:14:39.275 "data_size": 7936 00:14:39.275 } 00:14:39.275 ] 00:14:39.275 }' 00:14:39.275 14:39:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.275 14:39:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:14:39.275 14:39:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.275 14:39:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:39.275 14:39:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:39.275 14:39:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:39.275 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:39.275 14:39:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:39.275 14:39:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:39.275 14:39:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:39.275 14:39:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=590 00:14:39.275 14:39:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:39.275 14:39:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:39.275 14:39:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.275 14:39:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:39.275 14:39:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:39.275 14:39:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.275 14:39:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.275 14:39:30 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.275 14:39:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.275 14:39:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:39.275 14:39:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.275 14:39:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.275 "name": "raid_bdev1", 00:14:39.275 "uuid": "85e21040-6a08-481f-9134-b392808234b0", 00:14:39.275 "strip_size_kb": 0, 00:14:39.275 "state": "online", 00:14:39.275 "raid_level": "raid1", 00:14:39.275 "superblock": true, 00:14:39.275 "num_base_bdevs": 2, 00:14:39.275 "num_base_bdevs_discovered": 2, 00:14:39.275 "num_base_bdevs_operational": 2, 00:14:39.275 "process": { 00:14:39.275 "type": "rebuild", 00:14:39.275 "target": "spare", 00:14:39.275 "progress": { 00:14:39.275 "blocks": 2816, 00:14:39.275 "percent": 35 00:14:39.275 } 00:14:39.275 }, 00:14:39.275 "base_bdevs_list": [ 00:14:39.275 { 00:14:39.275 "name": "spare", 00:14:39.275 "uuid": "cac7fa46-f797-582c-ab30-268f05cf74b8", 00:14:39.275 "is_configured": true, 00:14:39.275 "data_offset": 256, 00:14:39.275 "data_size": 7936 00:14:39.275 }, 00:14:39.275 { 00:14:39.275 "name": "BaseBdev2", 00:14:39.275 "uuid": "276e9e63-4302-57c2-bdee-5fe2d4b31ef4", 00:14:39.275 "is_configured": true, 00:14:39.275 "data_offset": 256, 00:14:39.275 "data_size": 7936 00:14:39.275 } 00:14:39.275 ] 00:14:39.275 }' 00:14:39.275 14:39:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.275 14:39:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:39.275 14:39:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:14:39.275 14:39:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:39.275 14:39:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:40.213 14:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:40.213 14:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:40.213 14:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.213 14:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:40.213 14:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:40.213 14:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.213 14:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.213 14:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.213 14:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.213 14:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:40.213 14:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.213 14:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.213 "name": "raid_bdev1", 00:14:40.213 "uuid": "85e21040-6a08-481f-9134-b392808234b0", 00:14:40.213 "strip_size_kb": 0, 00:14:40.213 "state": "online", 00:14:40.213 "raid_level": "raid1", 00:14:40.213 "superblock": true, 00:14:40.213 "num_base_bdevs": 2, 00:14:40.213 
"num_base_bdevs_discovered": 2, 00:14:40.213 "num_base_bdevs_operational": 2, 00:14:40.213 "process": { 00:14:40.213 "type": "rebuild", 00:14:40.213 "target": "spare", 00:14:40.213 "progress": { 00:14:40.213 "blocks": 5632, 00:14:40.213 "percent": 70 00:14:40.213 } 00:14:40.213 }, 00:14:40.213 "base_bdevs_list": [ 00:14:40.213 { 00:14:40.213 "name": "spare", 00:14:40.213 "uuid": "cac7fa46-f797-582c-ab30-268f05cf74b8", 00:14:40.213 "is_configured": true, 00:14:40.213 "data_offset": 256, 00:14:40.213 "data_size": 7936 00:14:40.213 }, 00:14:40.213 { 00:14:40.213 "name": "BaseBdev2", 00:14:40.213 "uuid": "276e9e63-4302-57c2-bdee-5fe2d4b31ef4", 00:14:40.213 "is_configured": true, 00:14:40.213 "data_offset": 256, 00:14:40.213 "data_size": 7936 00:14:40.213 } 00:14:40.213 ] 00:14:40.213 }' 00:14:40.213 14:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.213 14:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:40.213 14:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.213 14:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:40.213 14:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:41.194 [2024-10-01 14:39:32.630356] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:41.194 [2024-10-01 14:39:32.630430] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:41.194 [2024-10-01 14:39:32.630539] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:41.194 14:39:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:41.194 14:39:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:41.194 14:39:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.194 14:39:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:41.194 14:39:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:41.194 14:39:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.194 14:39:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.194 14:39:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.194 14:39:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:41.194 14:39:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.194 14:39:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.453 14:39:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.453 "name": "raid_bdev1", 00:14:41.453 "uuid": "85e21040-6a08-481f-9134-b392808234b0", 00:14:41.453 "strip_size_kb": 0, 00:14:41.453 "state": "online", 00:14:41.453 "raid_level": "raid1", 00:14:41.453 "superblock": true, 00:14:41.453 "num_base_bdevs": 2, 00:14:41.453 "num_base_bdevs_discovered": 2, 00:14:41.453 "num_base_bdevs_operational": 2, 00:14:41.453 "base_bdevs_list": [ 00:14:41.453 { 00:14:41.453 "name": "spare", 00:14:41.453 "uuid": "cac7fa46-f797-582c-ab30-268f05cf74b8", 00:14:41.453 "is_configured": true, 00:14:41.453 "data_offset": 256, 00:14:41.453 "data_size": 7936 00:14:41.453 }, 00:14:41.453 { 00:14:41.453 "name": "BaseBdev2", 00:14:41.453 "uuid": "276e9e63-4302-57c2-bdee-5fe2d4b31ef4", 00:14:41.453 
"is_configured": true, 00:14:41.453 "data_offset": 256, 00:14:41.453 "data_size": 7936 00:14:41.453 } 00:14:41.453 ] 00:14:41.453 }' 00:14:41.453 14:39:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.453 14:39:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:41.453 14:39:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.453 14:39:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:41.453 14:39:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:14:41.453 14:39:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:41.453 14:39:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.453 14:39:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:41.453 14:39:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:41.453 14:39:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.453 14:39:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.453 14:39:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.453 14:39:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.453 14:39:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:41.453 14:39:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.453 14:39:32 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.453 "name": "raid_bdev1", 00:14:41.453 "uuid": "85e21040-6a08-481f-9134-b392808234b0", 00:14:41.453 "strip_size_kb": 0, 00:14:41.453 "state": "online", 00:14:41.453 "raid_level": "raid1", 00:14:41.453 "superblock": true, 00:14:41.453 "num_base_bdevs": 2, 00:14:41.453 "num_base_bdevs_discovered": 2, 00:14:41.453 "num_base_bdevs_operational": 2, 00:14:41.453 "base_bdevs_list": [ 00:14:41.453 { 00:14:41.453 "name": "spare", 00:14:41.453 "uuid": "cac7fa46-f797-582c-ab30-268f05cf74b8", 00:14:41.453 "is_configured": true, 00:14:41.453 "data_offset": 256, 00:14:41.453 "data_size": 7936 00:14:41.453 }, 00:14:41.453 { 00:14:41.453 "name": "BaseBdev2", 00:14:41.453 "uuid": "276e9e63-4302-57c2-bdee-5fe2d4b31ef4", 00:14:41.453 "is_configured": true, 00:14:41.453 "data_offset": 256, 00:14:41.453 "data_size": 7936 00:14:41.453 } 00:14:41.453 ] 00:14:41.453 }' 00:14:41.453 14:39:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.453 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:41.453 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.453 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:41.453 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:41.453 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.453 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:41.453 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:41.453 14:39:33 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:41.453 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:41.453 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.453 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.453 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.453 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.453 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.453 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.453 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.453 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:41.453 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.453 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.453 "name": "raid_bdev1", 00:14:41.453 "uuid": "85e21040-6a08-481f-9134-b392808234b0", 00:14:41.453 "strip_size_kb": 0, 00:14:41.453 "state": "online", 00:14:41.453 "raid_level": "raid1", 00:14:41.454 "superblock": true, 00:14:41.454 "num_base_bdevs": 2, 00:14:41.454 "num_base_bdevs_discovered": 2, 00:14:41.454 "num_base_bdevs_operational": 2, 00:14:41.454 "base_bdevs_list": [ 00:14:41.454 { 00:14:41.454 "name": "spare", 00:14:41.454 "uuid": "cac7fa46-f797-582c-ab30-268f05cf74b8", 00:14:41.454 "is_configured": true, 00:14:41.454 "data_offset": 256, 00:14:41.454 "data_size": 
7936 00:14:41.454 }, 00:14:41.454 { 00:14:41.454 "name": "BaseBdev2", 00:14:41.454 "uuid": "276e9e63-4302-57c2-bdee-5fe2d4b31ef4", 00:14:41.454 "is_configured": true, 00:14:41.454 "data_offset": 256, 00:14:41.454 "data_size": 7936 00:14:41.454 } 00:14:41.454 ] 00:14:41.454 }' 00:14:41.454 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.454 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:41.713 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:41.713 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.713 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:41.713 [2024-10-01 14:39:33.372722] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:41.713 [2024-10-01 14:39:33.372878] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:41.713 [2024-10-01 14:39:33.372962] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:41.713 [2024-10-01 14:39:33.373028] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:41.713 [2024-10-01 14:39:33.373039] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:41.713 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.713 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.713 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.713 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 
00:14:41.713 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:14:41.713 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.972 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:41.972 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:41.972 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:41.972 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:41.972 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:41.972 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:41.972 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:41.972 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:41.972 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:41.972 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:14:41.972 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:41.972 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:41.972 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:41.972 /dev/nbd0 00:14:41.972 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 
00:14:41.972 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:41.972 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:41.972 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:14:41.972 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:41.972 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:41.972 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:41.972 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:14:41.972 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:41.972 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:41.972 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:41.972 1+0 records in 00:14:41.972 1+0 records out 00:14:41.972 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197148 s, 20.8 MB/s 00:14:41.972 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:41.972 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:14:41.972 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:42.233 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:42.233 14:39:33 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:14:42.233 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:42.233 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:42.233 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:42.233 /dev/nbd1 00:14:42.233 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:42.233 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:42.233 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:42.233 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:14:42.233 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:42.233 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:42.233 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:42.233 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:14:42.233 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:42.233 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:42.233 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:42.233 1+0 records in 00:14:42.233 1+0 records out 00:14:42.233 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000345722 s, 11.8 MB/s 00:14:42.233 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:42.233 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:14:42.233 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:42.233 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:42.233 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:14:42.233 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:42.233 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:42.233 14:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:42.491 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:42.491 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:42.491 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:42.491 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:42.491 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:14:42.491 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:42.491 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:42.748 14:39:34 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:42.748 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:42.748 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:42.748 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:42.748 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:42.748 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:42.748 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:14:42.748 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:14:42.748 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:42.748 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:43.008 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:43.008 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:43.008 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:43.008 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:43.008 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:43.008 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:43.008 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:14:43.008 14:39:34 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:14:43.008 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:43.008 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:43.008 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.008 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:43.008 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.008 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:43.008 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.008 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:43.008 [2024-10-01 14:39:34.582579] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:43.008 [2024-10-01 14:39:34.582654] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:43.008 [2024-10-01 14:39:34.582676] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:43.008 [2024-10-01 14:39:34.582686] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:43.008 [2024-10-01 14:39:34.584741] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:43.008 [2024-10-01 14:39:34.584886] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:43.008 [2024-10-01 14:39:34.584974] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:43.008 [2024-10-01 14:39:34.585031] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:14:43.008 [2024-10-01 14:39:34.585165] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:43.008 spare 00:14:43.008 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.008 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:43.008 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.008 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:43.008 [2024-10-01 14:39:34.685251] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:43.008 [2024-10-01 14:39:34.685300] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:43.008 [2024-10-01 14:39:34.685419] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:14:43.008 [2024-10-01 14:39:34.685575] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:43.008 [2024-10-01 14:39:34.685585] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:43.008 [2024-10-01 14:39:34.685734] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:43.008 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.008 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:43.008 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:43.008 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.008 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:43.008 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:43.008 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:43.008 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.008 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.008 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.008 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.269 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.269 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.269 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.269 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:43.269 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.269 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.269 "name": "raid_bdev1", 00:14:43.269 "uuid": "85e21040-6a08-481f-9134-b392808234b0", 00:14:43.269 "strip_size_kb": 0, 00:14:43.269 "state": "online", 00:14:43.269 "raid_level": "raid1", 00:14:43.269 "superblock": true, 00:14:43.269 "num_base_bdevs": 2, 00:14:43.269 "num_base_bdevs_discovered": 2, 00:14:43.269 "num_base_bdevs_operational": 2, 00:14:43.269 "base_bdevs_list": [ 00:14:43.269 { 00:14:43.269 "name": "spare", 00:14:43.269 "uuid": "cac7fa46-f797-582c-ab30-268f05cf74b8", 00:14:43.269 
"is_configured": true, 00:14:43.269 "data_offset": 256, 00:14:43.269 "data_size": 7936 00:14:43.269 }, 00:14:43.269 { 00:14:43.269 "name": "BaseBdev2", 00:14:43.269 "uuid": "276e9e63-4302-57c2-bdee-5fe2d4b31ef4", 00:14:43.269 "is_configured": true, 00:14:43.269 "data_offset": 256, 00:14:43.269 "data_size": 7936 00:14:43.269 } 00:14:43.269 ] 00:14:43.269 }' 00:14:43.269 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.269 14:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:43.530 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:43.530 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.530 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:43.530 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:43.530 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.530 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.530 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.530 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.530 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:43.530 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.530 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.530 "name": "raid_bdev1", 00:14:43.530 "uuid": 
"85e21040-6a08-481f-9134-b392808234b0", 00:14:43.530 "strip_size_kb": 0, 00:14:43.530 "state": "online", 00:14:43.530 "raid_level": "raid1", 00:14:43.530 "superblock": true, 00:14:43.530 "num_base_bdevs": 2, 00:14:43.530 "num_base_bdevs_discovered": 2, 00:14:43.530 "num_base_bdevs_operational": 2, 00:14:43.530 "base_bdevs_list": [ 00:14:43.530 { 00:14:43.530 "name": "spare", 00:14:43.530 "uuid": "cac7fa46-f797-582c-ab30-268f05cf74b8", 00:14:43.530 "is_configured": true, 00:14:43.530 "data_offset": 256, 00:14:43.530 "data_size": 7936 00:14:43.530 }, 00:14:43.530 { 00:14:43.530 "name": "BaseBdev2", 00:14:43.530 "uuid": "276e9e63-4302-57c2-bdee-5fe2d4b31ef4", 00:14:43.530 "is_configured": true, 00:14:43.530 "data_offset": 256, 00:14:43.530 "data_size": 7936 00:14:43.530 } 00:14:43.530 ] 00:14:43.530 }' 00:14:43.530 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.530 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:43.530 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.530 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:43.530 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:43.530 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.530 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.530 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:43.530 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.530 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # 
[[ spare == \s\p\a\r\e ]] 00:14:43.530 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:43.530 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.530 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:43.530 [2024-10-01 14:39:35.162780] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:43.530 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.530 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:43.530 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:43.530 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.530 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:43.530 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:43.530 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:43.530 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.530 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.530 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.530 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.530 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.530 14:39:35 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.530 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.530 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:43.530 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.530 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.530 "name": "raid_bdev1", 00:14:43.530 "uuid": "85e21040-6a08-481f-9134-b392808234b0", 00:14:43.530 "strip_size_kb": 0, 00:14:43.530 "state": "online", 00:14:43.530 "raid_level": "raid1", 00:14:43.530 "superblock": true, 00:14:43.530 "num_base_bdevs": 2, 00:14:43.530 "num_base_bdevs_discovered": 1, 00:14:43.530 "num_base_bdevs_operational": 1, 00:14:43.530 "base_bdevs_list": [ 00:14:43.530 { 00:14:43.530 "name": null, 00:14:43.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.530 "is_configured": false, 00:14:43.530 "data_offset": 0, 00:14:43.530 "data_size": 7936 00:14:43.530 }, 00:14:43.530 { 00:14:43.530 "name": "BaseBdev2", 00:14:43.530 "uuid": "276e9e63-4302-57c2-bdee-5fe2d4b31ef4", 00:14:43.530 "is_configured": true, 00:14:43.530 "data_offset": 256, 00:14:43.530 "data_size": 7936 00:14:43.530 } 00:14:43.530 ] 00:14:43.530 }' 00:14:43.530 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.530 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:44.185 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:44.185 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.185 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:14:44.185 [2024-10-01 14:39:35.490899] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:44.185 [2024-10-01 14:39:35.491097] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:44.185 [2024-10-01 14:39:35.491115] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:44.185 [2024-10-01 14:39:35.491155] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:44.185 [2024-10-01 14:39:35.500349] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:14:44.185 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.185 14:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:44.185 [2024-10-01 14:39:35.502519] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:45.128 14:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:45.128 14:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.128 14:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:45.128 14:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:45.128 14:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.128 14:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.128 14:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.128 14:39:36 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:45.128 14:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.128 14:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.128 14:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.128 "name": "raid_bdev1", 00:14:45.128 "uuid": "85e21040-6a08-481f-9134-b392808234b0", 00:14:45.128 "strip_size_kb": 0, 00:14:45.128 "state": "online", 00:14:45.128 "raid_level": "raid1", 00:14:45.128 "superblock": true, 00:14:45.128 "num_base_bdevs": 2, 00:14:45.128 "num_base_bdevs_discovered": 2, 00:14:45.128 "num_base_bdevs_operational": 2, 00:14:45.128 "process": { 00:14:45.128 "type": "rebuild", 00:14:45.128 "target": "spare", 00:14:45.128 "progress": { 00:14:45.128 "blocks": 2560, 00:14:45.128 "percent": 32 00:14:45.128 } 00:14:45.128 }, 00:14:45.128 "base_bdevs_list": [ 00:14:45.128 { 00:14:45.128 "name": "spare", 00:14:45.128 "uuid": "cac7fa46-f797-582c-ab30-268f05cf74b8", 00:14:45.128 "is_configured": true, 00:14:45.128 "data_offset": 256, 00:14:45.128 "data_size": 7936 00:14:45.128 }, 00:14:45.129 { 00:14:45.129 "name": "BaseBdev2", 00:14:45.129 "uuid": "276e9e63-4302-57c2-bdee-5fe2d4b31ef4", 00:14:45.129 "is_configured": true, 00:14:45.129 "data_offset": 256, 00:14:45.129 "data_size": 7936 00:14:45.129 } 00:14:45.129 ] 00:14:45.129 }' 00:14:45.129 14:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:45.129 14:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:45.129 14:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.129 14:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 
-- # [[ spare == \s\p\a\r\e ]] 00:14:45.129 14:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:45.129 14:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.129 14:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:45.129 [2024-10-01 14:39:36.608713] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:45.129 [2024-10-01 14:39:36.609081] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:45.129 [2024-10-01 14:39:36.609139] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:45.129 [2024-10-01 14:39:36.609161] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:45.129 [2024-10-01 14:39:36.609171] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:45.129 14:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.129 14:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:45.129 14:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:45.129 14:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:45.129 14:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:45.129 14:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:45.129 14:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:45.129 14:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:14:45.129 14:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.129 14:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.129 14:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.129 14:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.129 14:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.129 14:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.129 14:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:45.129 14:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.129 14:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.129 "name": "raid_bdev1", 00:14:45.129 "uuid": "85e21040-6a08-481f-9134-b392808234b0", 00:14:45.129 "strip_size_kb": 0, 00:14:45.129 "state": "online", 00:14:45.129 "raid_level": "raid1", 00:14:45.129 "superblock": true, 00:14:45.129 "num_base_bdevs": 2, 00:14:45.129 "num_base_bdevs_discovered": 1, 00:14:45.129 "num_base_bdevs_operational": 1, 00:14:45.129 "base_bdevs_list": [ 00:14:45.129 { 00:14:45.129 "name": null, 00:14:45.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.129 "is_configured": false, 00:14:45.129 "data_offset": 0, 00:14:45.129 "data_size": 7936 00:14:45.129 }, 00:14:45.129 { 00:14:45.129 "name": "BaseBdev2", 00:14:45.129 "uuid": "276e9e63-4302-57c2-bdee-5fe2d4b31ef4", 00:14:45.129 "is_configured": true, 00:14:45.129 "data_offset": 256, 00:14:45.129 "data_size": 7936 00:14:45.129 } 00:14:45.129 ] 00:14:45.129 }' 00:14:45.129 14:39:36 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.129 14:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:45.390 14:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:45.390 14:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.390 14:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:45.390 [2024-10-01 14:39:36.959500] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:45.390 [2024-10-01 14:39:36.959759] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.390 [2024-10-01 14:39:36.959794] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:45.390 [2024-10-01 14:39:36.959807] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:45.390 [2024-10-01 14:39:36.960063] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.390 [2024-10-01 14:39:36.960079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:45.390 [2024-10-01 14:39:36.960141] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:45.390 [2024-10-01 14:39:36.960154] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:45.390 [2024-10-01 14:39:36.960171] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:45.390 [2024-10-01 14:39:36.960192] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:45.390 [2024-10-01 14:39:36.969233] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:14:45.390 spare 00:14:45.390 14:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.390 14:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:45.390 [2024-10-01 14:39:36.971399] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:46.331 14:39:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:46.331 14:39:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:46.331 14:39:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:46.331 14:39:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:46.331 14:39:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.331 14:39:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.331 14:39:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.331 14:39:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.331 14:39:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:46.331 14:39:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.331 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.331 "name": 
"raid_bdev1", 00:14:46.331 "uuid": "85e21040-6a08-481f-9134-b392808234b0", 00:14:46.331 "strip_size_kb": 0, 00:14:46.331 "state": "online", 00:14:46.331 "raid_level": "raid1", 00:14:46.331 "superblock": true, 00:14:46.331 "num_base_bdevs": 2, 00:14:46.331 "num_base_bdevs_discovered": 2, 00:14:46.331 "num_base_bdevs_operational": 2, 00:14:46.331 "process": { 00:14:46.331 "type": "rebuild", 00:14:46.331 "target": "spare", 00:14:46.331 "progress": { 00:14:46.331 "blocks": 2560, 00:14:46.331 "percent": 32 00:14:46.331 } 00:14:46.331 }, 00:14:46.331 "base_bdevs_list": [ 00:14:46.331 { 00:14:46.331 "name": "spare", 00:14:46.331 "uuid": "cac7fa46-f797-582c-ab30-268f05cf74b8", 00:14:46.331 "is_configured": true, 00:14:46.331 "data_offset": 256, 00:14:46.331 "data_size": 7936 00:14:46.331 }, 00:14:46.331 { 00:14:46.331 "name": "BaseBdev2", 00:14:46.331 "uuid": "276e9e63-4302-57c2-bdee-5fe2d4b31ef4", 00:14:46.331 "is_configured": true, 00:14:46.331 "data_offset": 256, 00:14:46.331 "data_size": 7936 00:14:46.331 } 00:14:46.331 ] 00:14:46.331 }' 00:14:46.331 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.591 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:46.591 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.591 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:46.591 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:46.591 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.591 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:46.591 [2024-10-01 14:39:38.089662] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:14:46.591 [2024-10-01 14:39:38.179497] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:46.591 [2024-10-01 14:39:38.179592] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.591 [2024-10-01 14:39:38.179610] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:46.591 [2024-10-01 14:39:38.179617] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:46.591 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.591 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:46.591 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:46.591 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.591 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:46.591 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:46.591 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:46.591 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.591 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.591 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.591 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.591 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:14:46.591 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.591 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.591 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:46.591 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.591 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.591 "name": "raid_bdev1", 00:14:46.591 "uuid": "85e21040-6a08-481f-9134-b392808234b0", 00:14:46.591 "strip_size_kb": 0, 00:14:46.591 "state": "online", 00:14:46.591 "raid_level": "raid1", 00:14:46.591 "superblock": true, 00:14:46.591 "num_base_bdevs": 2, 00:14:46.591 "num_base_bdevs_discovered": 1, 00:14:46.591 "num_base_bdevs_operational": 1, 00:14:46.591 "base_bdevs_list": [ 00:14:46.591 { 00:14:46.591 "name": null, 00:14:46.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.591 "is_configured": false, 00:14:46.591 "data_offset": 0, 00:14:46.591 "data_size": 7936 00:14:46.591 }, 00:14:46.591 { 00:14:46.591 "name": "BaseBdev2", 00:14:46.591 "uuid": "276e9e63-4302-57c2-bdee-5fe2d4b31ef4", 00:14:46.591 "is_configured": true, 00:14:46.591 "data_offset": 256, 00:14:46.591 "data_size": 7936 00:14:46.591 } 00:14:46.591 ] 00:14:46.591 }' 00:14:46.591 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.591 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:47.158 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:47.158 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.158 14:39:38 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:47.158 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:47.158 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.158 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.158 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.158 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:47.158 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.158 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.158 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.158 "name": "raid_bdev1", 00:14:47.158 "uuid": "85e21040-6a08-481f-9134-b392808234b0", 00:14:47.158 "strip_size_kb": 0, 00:14:47.158 "state": "online", 00:14:47.158 "raid_level": "raid1", 00:14:47.158 "superblock": true, 00:14:47.158 "num_base_bdevs": 2, 00:14:47.158 "num_base_bdevs_discovered": 1, 00:14:47.158 "num_base_bdevs_operational": 1, 00:14:47.158 "base_bdevs_list": [ 00:14:47.158 { 00:14:47.158 "name": null, 00:14:47.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.158 "is_configured": false, 00:14:47.158 "data_offset": 0, 00:14:47.158 "data_size": 7936 00:14:47.159 }, 00:14:47.159 { 00:14:47.159 "name": "BaseBdev2", 00:14:47.159 "uuid": "276e9e63-4302-57c2-bdee-5fe2d4b31ef4", 00:14:47.159 "is_configured": true, 00:14:47.159 "data_offset": 256, 00:14:47.159 "data_size": 7936 00:14:47.159 } 00:14:47.159 ] 00:14:47.159 }' 00:14:47.159 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.159 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:47.159 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:47.159 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:47.159 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:47.159 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.159 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:47.159 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.159 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:47.159 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.159 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:47.159 [2024-10-01 14:39:38.685954] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:47.159 [2024-10-01 14:39:38.686024] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:47.159 [2024-10-01 14:39:38.686048] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:47.159 [2024-10-01 14:39:38.686059] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:47.159 [2024-10-01 14:39:38.686284] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:47.159 [2024-10-01 14:39:38.686298] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:14:47.159 [2024-10-01 14:39:38.686355] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:47.159 [2024-10-01 14:39:38.686367] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:47.159 [2024-10-01 14:39:38.686377] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:47.159 [2024-10-01 14:39:38.686387] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:47.159 BaseBdev1 00:14:47.159 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.159 14:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:48.117 14:39:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:48.117 14:39:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:48.117 14:39:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:48.117 14:39:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:48.117 14:39:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:48.117 14:39:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:48.117 14:39:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.117 14:39:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.117 14:39:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:48.118 14:39:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.118 14:39:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.118 14:39:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.118 14:39:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.118 14:39:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:48.118 14:39:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.118 14:39:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.118 "name": "raid_bdev1", 00:14:48.118 "uuid": "85e21040-6a08-481f-9134-b392808234b0", 00:14:48.118 "strip_size_kb": 0, 00:14:48.118 "state": "online", 00:14:48.118 "raid_level": "raid1", 00:14:48.118 "superblock": true, 00:14:48.118 "num_base_bdevs": 2, 00:14:48.118 "num_base_bdevs_discovered": 1, 00:14:48.118 "num_base_bdevs_operational": 1, 00:14:48.118 "base_bdevs_list": [ 00:14:48.118 { 00:14:48.118 "name": null, 00:14:48.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.118 "is_configured": false, 00:14:48.118 "data_offset": 0, 00:14:48.118 "data_size": 7936 00:14:48.118 }, 00:14:48.118 { 00:14:48.118 "name": "BaseBdev2", 00:14:48.118 "uuid": "276e9e63-4302-57c2-bdee-5fe2d4b31ef4", 00:14:48.118 "is_configured": true, 00:14:48.118 "data_offset": 256, 00:14:48.118 "data_size": 7936 00:14:48.118 } 00:14:48.118 ] 00:14:48.118 }' 00:14:48.118 14:39:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.118 14:39:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:48.378 14:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:14:48.378 14:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:48.378 14:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:48.378 14:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:48.378 14:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.378 14:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.378 14:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.378 14:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.378 14:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:48.640 14:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.640 14:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:48.640 "name": "raid_bdev1", 00:14:48.640 "uuid": "85e21040-6a08-481f-9134-b392808234b0", 00:14:48.640 "strip_size_kb": 0, 00:14:48.640 "state": "online", 00:14:48.640 "raid_level": "raid1", 00:14:48.640 "superblock": true, 00:14:48.640 "num_base_bdevs": 2, 00:14:48.640 "num_base_bdevs_discovered": 1, 00:14:48.640 "num_base_bdevs_operational": 1, 00:14:48.640 "base_bdevs_list": [ 00:14:48.640 { 00:14:48.640 "name": null, 00:14:48.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.640 "is_configured": false, 00:14:48.640 "data_offset": 0, 00:14:48.640 "data_size": 7936 00:14:48.640 }, 00:14:48.640 { 00:14:48.640 "name": "BaseBdev2", 00:14:48.640 "uuid": "276e9e63-4302-57c2-bdee-5fe2d4b31ef4", 00:14:48.640 "is_configured": 
true, 00:14:48.640 "data_offset": 256, 00:14:48.640 "data_size": 7936 00:14:48.640 } 00:14:48.640 ] 00:14:48.640 }' 00:14:48.640 14:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:48.640 14:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:48.640 14:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:48.640 14:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:48.640 14:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:48.640 14:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:14:48.640 14:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:48.640 14:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:48.640 14:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:48.640 14:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:48.640 14:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:48.640 14:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:48.640 14:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.640 14:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:48.640 [2024-10-01 14:39:40.154401] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:48.640 [2024-10-01 14:39:40.154547] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:48.640 [2024-10-01 14:39:40.154563] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:48.640 request: 00:14:48.640 { 00:14:48.640 "base_bdev": "BaseBdev1", 00:14:48.640 "raid_bdev": "raid_bdev1", 00:14:48.640 "method": "bdev_raid_add_base_bdev", 00:14:48.640 "req_id": 1 00:14:48.640 } 00:14:48.640 Got JSON-RPC error response 00:14:48.640 response: 00:14:48.640 { 00:14:48.640 "code": -22, 00:14:48.640 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:48.640 } 00:14:48.640 14:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:48.640 14:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:14:48.640 14:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:48.640 14:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:48.640 14:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:48.640 14:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:49.585 14:39:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:49.585 14:39:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:49.585 14:39:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.585 14:39:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:14:49.585 14:39:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:49.585 14:39:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:49.585 14:39:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.585 14:39:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.585 14:39:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.585 14:39:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.585 14:39:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.585 14:39:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.585 14:39:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:49.585 14:39:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.585 14:39:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.585 14:39:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.585 "name": "raid_bdev1", 00:14:49.585 "uuid": "85e21040-6a08-481f-9134-b392808234b0", 00:14:49.585 "strip_size_kb": 0, 00:14:49.585 "state": "online", 00:14:49.585 "raid_level": "raid1", 00:14:49.585 "superblock": true, 00:14:49.585 "num_base_bdevs": 2, 00:14:49.585 "num_base_bdevs_discovered": 1, 00:14:49.585 "num_base_bdevs_operational": 1, 00:14:49.585 "base_bdevs_list": [ 00:14:49.585 { 00:14:49.585 "name": null, 00:14:49.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.585 "is_configured": false, 00:14:49.585 
"data_offset": 0, 00:14:49.586 "data_size": 7936 00:14:49.586 }, 00:14:49.586 { 00:14:49.586 "name": "BaseBdev2", 00:14:49.586 "uuid": "276e9e63-4302-57c2-bdee-5fe2d4b31ef4", 00:14:49.586 "is_configured": true, 00:14:49.586 "data_offset": 256, 00:14:49.586 "data_size": 7936 00:14:49.586 } 00:14:49.586 ] 00:14:49.586 }' 00:14:49.586 14:39:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.586 14:39:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:49.847 14:39:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:49.847 14:39:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.847 14:39:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:49.847 14:39:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:49.847 14:39:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.847 14:39:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.847 14:39:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.847 14:39:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:49.847 14:39:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.847 14:39:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.847 14:39:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.847 "name": "raid_bdev1", 00:14:49.847 "uuid": "85e21040-6a08-481f-9134-b392808234b0", 00:14:49.847 
"strip_size_kb": 0, 00:14:49.847 "state": "online", 00:14:49.847 "raid_level": "raid1", 00:14:49.847 "superblock": true, 00:14:49.847 "num_base_bdevs": 2, 00:14:49.847 "num_base_bdevs_discovered": 1, 00:14:49.847 "num_base_bdevs_operational": 1, 00:14:49.847 "base_bdevs_list": [ 00:14:49.847 { 00:14:49.847 "name": null, 00:14:49.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.847 "is_configured": false, 00:14:49.847 "data_offset": 0, 00:14:49.847 "data_size": 7936 00:14:49.847 }, 00:14:49.847 { 00:14:49.847 "name": "BaseBdev2", 00:14:49.847 "uuid": "276e9e63-4302-57c2-bdee-5fe2d4b31ef4", 00:14:49.847 "is_configured": true, 00:14:49.847 "data_offset": 256, 00:14:49.847 "data_size": 7936 00:14:49.847 } 00:14:49.847 ] 00:14:49.847 }' 00:14:49.847 14:39:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:50.108 14:39:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:50.108 14:39:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:50.108 14:39:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:50.108 14:39:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 85512 00:14:50.108 14:39:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 85512 ']' 00:14:50.108 14:39:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 85512 00:14:50.108 14:39:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:14:50.108 14:39:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:50.108 14:39:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85512 00:14:50.108 killing process with 
pid 85512 00:14:50.108 Received shutdown signal, test time was about 60.000000 seconds 00:14:50.108 00:14:50.108 Latency(us) 00:14:50.108 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.108 =================================================================================================================== 00:14:50.108 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:50.108 14:39:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:50.108 14:39:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:50.108 14:39:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85512' 00:14:50.108 14:39:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 85512 00:14:50.108 [2024-10-01 14:39:41.604445] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:50.108 14:39:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 85512 00:14:50.108 [2024-10-01 14:39:41.604567] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:50.108 [2024-10-01 14:39:41.604613] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:50.108 [2024-10-01 14:39:41.604625] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:50.368 [2024-10-01 14:39:41.808976] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:51.333 14:39:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:14:51.333 00:14:51.333 real 0m18.116s 00:14:51.333 user 0m22.778s 00:14:51.333 sys 0m2.151s 00:14:51.333 14:39:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:51.333 
14:39:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:51.333 ************************************ 00:14:51.333 END TEST raid_rebuild_test_sb_md_separate 00:14:51.333 ************************************ 00:14:51.333 14:39:42 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:14:51.333 14:39:42 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:14:51.333 14:39:42 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:51.333 14:39:42 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:51.333 14:39:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:51.333 ************************************ 00:14:51.333 START TEST raid_state_function_test_sb_md_interleaved 00:14:51.333 ************************************ 00:14:51.333 14:39:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:14:51.333 14:39:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:14:51.333 14:39:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:14:51.333 14:39:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:51.333 14:39:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:51.333 14:39:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:51.333 14:39:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:51.333 14:39:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:51.333 14:39:42 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:51.333 14:39:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:51.333 14:39:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:51.333 14:39:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:51.333 14:39:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:51.333 14:39:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:51.333 14:39:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:51.333 14:39:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:51.333 14:39:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:51.333 14:39:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:51.333 14:39:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:51.333 14:39:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:14:51.333 14:39:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:14:51.333 14:39:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:51.333 14:39:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:51.333 Process raid pid: 86186 00:14:51.333 14:39:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@229 -- # raid_pid=86186 00:14:51.333 14:39:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86186' 00:14:51.333 14:39:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 86186 00:14:51.333 14:39:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 86186 ']' 00:14:51.333 14:39:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.333 14:39:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:51.333 14:39:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.333 14:39:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:51.333 14:39:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:51.333 14:39:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:51.333 [2024-10-01 14:39:42.803922] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:14:51.333 [2024-10-01 14:39:42.804072] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:51.333 [2024-10-01 14:39:42.962141] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.596 [2024-10-01 14:39:43.215124] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.857 [2024-10-01 14:39:43.376586] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:51.857 [2024-10-01 14:39:43.376642] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:52.124 14:39:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:52.124 14:39:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:14:52.124 14:39:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:52.124 14:39:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.124 14:39:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:52.124 [2024-10-01 14:39:43.704088] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:52.124 [2024-10-01 14:39:43.704163] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:52.124 [2024-10-01 14:39:43.704176] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:52.124 [2024-10-01 14:39:43.704186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:52.124 14:39:43 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.124 14:39:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:52.124 14:39:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.124 14:39:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:52.124 14:39:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:52.124 14:39:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:52.124 14:39:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:52.124 14:39:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.124 14:39:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.124 14:39:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.124 14:39:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.124 14:39:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.124 14:39:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.124 14:39:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.124 14:39:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:52.124 14:39:43 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.124 14:39:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.124 "name": "Existed_Raid", 00:14:52.124 "uuid": "0afb8eb8-28cb-469e-8d6a-abe5ce506eb3", 00:14:52.124 "strip_size_kb": 0, 00:14:52.124 "state": "configuring", 00:14:52.124 "raid_level": "raid1", 00:14:52.124 "superblock": true, 00:14:52.124 "num_base_bdevs": 2, 00:14:52.124 "num_base_bdevs_discovered": 0, 00:14:52.124 "num_base_bdevs_operational": 2, 00:14:52.124 "base_bdevs_list": [ 00:14:52.124 { 00:14:52.124 "name": "BaseBdev1", 00:14:52.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.124 "is_configured": false, 00:14:52.124 "data_offset": 0, 00:14:52.124 "data_size": 0 00:14:52.124 }, 00:14:52.124 { 00:14:52.124 "name": "BaseBdev2", 00:14:52.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.124 "is_configured": false, 00:14:52.124 "data_offset": 0, 00:14:52.124 "data_size": 0 00:14:52.124 } 00:14:52.124 ] 00:14:52.124 }' 00:14:52.124 14:39:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.124 14:39:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:52.386 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:52.386 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.386 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:52.386 [2024-10-01 14:39:44.040061] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:52.386 [2024-10-01 14:39:44.040116] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:14:52.386 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.386 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:52.386 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.386 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:52.386 [2024-10-01 14:39:44.048092] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:52.386 [2024-10-01 14:39:44.048146] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:52.386 [2024-10-01 14:39:44.048154] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:52.386 [2024-10-01 14:39:44.048169] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:52.386 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.386 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:14:52.386 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.386 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:52.647 [2024-10-01 14:39:44.098638] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:52.647 BaseBdev1 00:14:52.647 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.647 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:52.647 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:52.647 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:52.647 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:14:52.647 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:52.647 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:52.647 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:52.647 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.647 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:52.647 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.647 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:52.647 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.647 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:52.647 [ 00:14:52.647 { 00:14:52.647 "name": "BaseBdev1", 00:14:52.647 "aliases": [ 00:14:52.647 "1f9453d0-b16e-435f-8443-2d2d1922b60b" 00:14:52.647 ], 00:14:52.647 "product_name": "Malloc disk", 00:14:52.647 "block_size": 4128, 00:14:52.647 "num_blocks": 8192, 00:14:52.647 "uuid": "1f9453d0-b16e-435f-8443-2d2d1922b60b", 00:14:52.647 "md_size": 32, 00:14:52.647 
"md_interleave": true, 00:14:52.647 "dif_type": 0, 00:14:52.647 "assigned_rate_limits": { 00:14:52.647 "rw_ios_per_sec": 0, 00:14:52.647 "rw_mbytes_per_sec": 0, 00:14:52.647 "r_mbytes_per_sec": 0, 00:14:52.647 "w_mbytes_per_sec": 0 00:14:52.647 }, 00:14:52.647 "claimed": true, 00:14:52.647 "claim_type": "exclusive_write", 00:14:52.647 "zoned": false, 00:14:52.647 "supported_io_types": { 00:14:52.647 "read": true, 00:14:52.647 "write": true, 00:14:52.647 "unmap": true, 00:14:52.647 "flush": true, 00:14:52.647 "reset": true, 00:14:52.647 "nvme_admin": false, 00:14:52.647 "nvme_io": false, 00:14:52.647 "nvme_io_md": false, 00:14:52.647 "write_zeroes": true, 00:14:52.647 "zcopy": true, 00:14:52.647 "get_zone_info": false, 00:14:52.647 "zone_management": false, 00:14:52.647 "zone_append": false, 00:14:52.647 "compare": false, 00:14:52.647 "compare_and_write": false, 00:14:52.647 "abort": true, 00:14:52.647 "seek_hole": false, 00:14:52.647 "seek_data": false, 00:14:52.647 "copy": true, 00:14:52.647 "nvme_iov_md": false 00:14:52.647 }, 00:14:52.647 "memory_domains": [ 00:14:52.647 { 00:14:52.647 "dma_device_id": "system", 00:14:52.647 "dma_device_type": 1 00:14:52.647 }, 00:14:52.647 { 00:14:52.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.647 "dma_device_type": 2 00:14:52.647 } 00:14:52.647 ], 00:14:52.647 "driver_specific": {} 00:14:52.647 } 00:14:52.647 ] 00:14:52.647 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.647 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:14:52.647 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:52.647 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.647 14:39:44 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:52.647 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:52.647 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:52.647 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:52.647 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.647 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.647 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.647 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.647 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.647 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.647 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.647 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:52.647 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.647 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.647 "name": "Existed_Raid", 00:14:52.647 "uuid": "39922b15-fcd6-449b-aa4b-c584a44f35b7", 00:14:52.647 "strip_size_kb": 0, 00:14:52.647 "state": "configuring", 00:14:52.647 "raid_level": "raid1", 
00:14:52.647 "superblock": true, 00:14:52.647 "num_base_bdevs": 2, 00:14:52.647 "num_base_bdevs_discovered": 1, 00:14:52.647 "num_base_bdevs_operational": 2, 00:14:52.647 "base_bdevs_list": [ 00:14:52.647 { 00:14:52.647 "name": "BaseBdev1", 00:14:52.647 "uuid": "1f9453d0-b16e-435f-8443-2d2d1922b60b", 00:14:52.647 "is_configured": true, 00:14:52.647 "data_offset": 256, 00:14:52.647 "data_size": 7936 00:14:52.647 }, 00:14:52.647 { 00:14:52.647 "name": "BaseBdev2", 00:14:52.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.647 "is_configured": false, 00:14:52.647 "data_offset": 0, 00:14:52.647 "data_size": 0 00:14:52.647 } 00:14:52.647 ] 00:14:52.647 }' 00:14:52.647 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.647 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:52.907 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:52.907 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.907 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:52.907 [2024-10-01 14:39:44.466845] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:52.907 [2024-10-01 14:39:44.466917] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:52.907 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.907 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:52.907 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:14:52.907 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:52.907 [2024-10-01 14:39:44.474873] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:52.907 [2024-10-01 14:39:44.477076] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:52.907 [2024-10-01 14:39:44.477133] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:52.907 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.907 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:52.907 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:52.907 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:52.907 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.907 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:52.907 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:52.907 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:52.907 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:52.907 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.907 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.907 
14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.907 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.907 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.907 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.907 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.907 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:52.907 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.907 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.907 "name": "Existed_Raid", 00:14:52.907 "uuid": "4644efcf-50dd-4734-ad4c-fe8275df372e", 00:14:52.907 "strip_size_kb": 0, 00:14:52.907 "state": "configuring", 00:14:52.907 "raid_level": "raid1", 00:14:52.907 "superblock": true, 00:14:52.907 "num_base_bdevs": 2, 00:14:52.907 "num_base_bdevs_discovered": 1, 00:14:52.907 "num_base_bdevs_operational": 2, 00:14:52.907 "base_bdevs_list": [ 00:14:52.907 { 00:14:52.907 "name": "BaseBdev1", 00:14:52.907 "uuid": "1f9453d0-b16e-435f-8443-2d2d1922b60b", 00:14:52.907 "is_configured": true, 00:14:52.907 "data_offset": 256, 00:14:52.907 "data_size": 7936 00:14:52.907 }, 00:14:52.907 { 00:14:52.907 "name": "BaseBdev2", 00:14:52.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.907 "is_configured": false, 00:14:52.907 "data_offset": 0, 00:14:52.907 "data_size": 0 00:14:52.907 } 00:14:52.907 ] 00:14:52.907 }' 00:14:52.907 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:14:52.907 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:53.168 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:14:53.168 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.168 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:53.168 [2024-10-01 14:39:44.830178] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:53.168 [2024-10-01 14:39:44.830447] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:53.168 [2024-10-01 14:39:44.830462] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:14:53.168 [2024-10-01 14:39:44.830558] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:53.168 [2024-10-01 14:39:44.830634] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:53.168 [2024-10-01 14:39:44.830646] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:53.168 [2024-10-01 14:39:44.830743] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:53.168 BaseBdev2 00:14:53.168 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.168 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:53.168 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:53.168 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:14:53.168 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:14:53.168 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:53.168 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:53.168 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:53.168 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.168 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:53.168 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.168 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:53.168 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.168 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:53.168 [ 00:14:53.168 { 00:14:53.168 "name": "BaseBdev2", 00:14:53.168 "aliases": [ 00:14:53.168 "01b6ca74-997c-4d20-9984-9b0ca32cac8d" 00:14:53.168 ], 00:14:53.432 "product_name": "Malloc disk", 00:14:53.432 "block_size": 4128, 00:14:53.432 "num_blocks": 8192, 00:14:53.432 "uuid": "01b6ca74-997c-4d20-9984-9b0ca32cac8d", 00:14:53.432 "md_size": 32, 00:14:53.432 "md_interleave": true, 00:14:53.432 "dif_type": 0, 00:14:53.432 "assigned_rate_limits": { 00:14:53.432 "rw_ios_per_sec": 0, 00:14:53.432 "rw_mbytes_per_sec": 0, 00:14:53.432 "r_mbytes_per_sec": 0, 00:14:53.432 "w_mbytes_per_sec": 0 00:14:53.432 }, 00:14:53.432 "claimed": true, 00:14:53.432 "claim_type": "exclusive_write", 
00:14:53.432 "zoned": false, 00:14:53.432 "supported_io_types": { 00:14:53.432 "read": true, 00:14:53.432 "write": true, 00:14:53.432 "unmap": true, 00:14:53.432 "flush": true, 00:14:53.432 "reset": true, 00:14:53.432 "nvme_admin": false, 00:14:53.432 "nvme_io": false, 00:14:53.432 "nvme_io_md": false, 00:14:53.432 "write_zeroes": true, 00:14:53.432 "zcopy": true, 00:14:53.432 "get_zone_info": false, 00:14:53.432 "zone_management": false, 00:14:53.432 "zone_append": false, 00:14:53.432 "compare": false, 00:14:53.432 "compare_and_write": false, 00:14:53.432 "abort": true, 00:14:53.432 "seek_hole": false, 00:14:53.432 "seek_data": false, 00:14:53.432 "copy": true, 00:14:53.432 "nvme_iov_md": false 00:14:53.432 }, 00:14:53.432 "memory_domains": [ 00:14:53.432 { 00:14:53.432 "dma_device_id": "system", 00:14:53.432 "dma_device_type": 1 00:14:53.432 }, 00:14:53.432 { 00:14:53.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.432 "dma_device_type": 2 00:14:53.432 } 00:14:53.432 ], 00:14:53.432 "driver_specific": {} 00:14:53.432 } 00:14:53.432 ] 00:14:53.432 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.432 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:14:53.432 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:53.432 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:53.432 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:53.432 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:53.432 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:53.432 
14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:53.432 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:53.432 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:53.432 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.432 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.432 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.432 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.432 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.432 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.432 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.432 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:53.432 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.432 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.432 "name": "Existed_Raid", 00:14:53.432 "uuid": "4644efcf-50dd-4734-ad4c-fe8275df372e", 00:14:53.432 "strip_size_kb": 0, 00:14:53.432 "state": "online", 00:14:53.432 "raid_level": "raid1", 00:14:53.432 "superblock": true, 00:14:53.432 "num_base_bdevs": 2, 00:14:53.433 "num_base_bdevs_discovered": 2, 00:14:53.433 
"num_base_bdevs_operational": 2, 00:14:53.433 "base_bdevs_list": [ 00:14:53.433 { 00:14:53.433 "name": "BaseBdev1", 00:14:53.433 "uuid": "1f9453d0-b16e-435f-8443-2d2d1922b60b", 00:14:53.433 "is_configured": true, 00:14:53.433 "data_offset": 256, 00:14:53.433 "data_size": 7936 00:14:53.433 }, 00:14:53.433 { 00:14:53.433 "name": "BaseBdev2", 00:14:53.433 "uuid": "01b6ca74-997c-4d20-9984-9b0ca32cac8d", 00:14:53.433 "is_configured": true, 00:14:53.433 "data_offset": 256, 00:14:53.433 "data_size": 7936 00:14:53.433 } 00:14:53.433 ] 00:14:53.433 }' 00:14:53.433 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.433 14:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:53.694 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:53.694 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:53.694 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:53.694 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:53.694 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:14:53.694 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:53.694 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:53.694 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:53.694 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.694 14:39:45 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:53.694 [2024-10-01 14:39:45.190743] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:53.694 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.694 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:53.694 "name": "Existed_Raid", 00:14:53.694 "aliases": [ 00:14:53.694 "4644efcf-50dd-4734-ad4c-fe8275df372e" 00:14:53.694 ], 00:14:53.694 "product_name": "Raid Volume", 00:14:53.694 "block_size": 4128, 00:14:53.694 "num_blocks": 7936, 00:14:53.694 "uuid": "4644efcf-50dd-4734-ad4c-fe8275df372e", 00:14:53.694 "md_size": 32, 00:14:53.694 "md_interleave": true, 00:14:53.694 "dif_type": 0, 00:14:53.694 "assigned_rate_limits": { 00:14:53.694 "rw_ios_per_sec": 0, 00:14:53.694 "rw_mbytes_per_sec": 0, 00:14:53.694 "r_mbytes_per_sec": 0, 00:14:53.694 "w_mbytes_per_sec": 0 00:14:53.694 }, 00:14:53.694 "claimed": false, 00:14:53.694 "zoned": false, 00:14:53.694 "supported_io_types": { 00:14:53.694 "read": true, 00:14:53.694 "write": true, 00:14:53.694 "unmap": false, 00:14:53.694 "flush": false, 00:14:53.694 "reset": true, 00:14:53.694 "nvme_admin": false, 00:14:53.694 "nvme_io": false, 00:14:53.694 "nvme_io_md": false, 00:14:53.694 "write_zeroes": true, 00:14:53.694 "zcopy": false, 00:14:53.694 "get_zone_info": false, 00:14:53.694 "zone_management": false, 00:14:53.694 "zone_append": false, 00:14:53.694 "compare": false, 00:14:53.694 "compare_and_write": false, 00:14:53.694 "abort": false, 00:14:53.694 "seek_hole": false, 00:14:53.694 "seek_data": false, 00:14:53.694 "copy": false, 00:14:53.694 "nvme_iov_md": false 00:14:53.694 }, 00:14:53.694 "memory_domains": [ 00:14:53.694 { 00:14:53.694 "dma_device_id": "system", 00:14:53.694 "dma_device_type": 1 00:14:53.694 }, 00:14:53.694 { 00:14:53.694 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:53.694 "dma_device_type": 2 00:14:53.694 }, 00:14:53.694 { 00:14:53.694 "dma_device_id": "system", 00:14:53.694 "dma_device_type": 1 00:14:53.694 }, 00:14:53.695 { 00:14:53.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.695 "dma_device_type": 2 00:14:53.695 } 00:14:53.695 ], 00:14:53.695 "driver_specific": { 00:14:53.695 "raid": { 00:14:53.695 "uuid": "4644efcf-50dd-4734-ad4c-fe8275df372e", 00:14:53.695 "strip_size_kb": 0, 00:14:53.695 "state": "online", 00:14:53.695 "raid_level": "raid1", 00:14:53.695 "superblock": true, 00:14:53.695 "num_base_bdevs": 2, 00:14:53.695 "num_base_bdevs_discovered": 2, 00:14:53.695 "num_base_bdevs_operational": 2, 00:14:53.695 "base_bdevs_list": [ 00:14:53.695 { 00:14:53.695 "name": "BaseBdev1", 00:14:53.695 "uuid": "1f9453d0-b16e-435f-8443-2d2d1922b60b", 00:14:53.695 "is_configured": true, 00:14:53.695 "data_offset": 256, 00:14:53.695 "data_size": 7936 00:14:53.695 }, 00:14:53.695 { 00:14:53.695 "name": "BaseBdev2", 00:14:53.695 "uuid": "01b6ca74-997c-4d20-9984-9b0ca32cac8d", 00:14:53.695 "is_configured": true, 00:14:53.695 "data_offset": 256, 00:14:53.695 "data_size": 7936 00:14:53.695 } 00:14:53.695 ] 00:14:53.695 } 00:14:53.695 } 00:14:53.695 }' 00:14:53.695 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:53.695 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:53.695 BaseBdev2' 00:14:53.695 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.695 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:14:53.695 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:14:53.695 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.695 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:53.695 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.695 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:53.695 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.695 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:14:53.695 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:14:53.695 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.695 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:53.695 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.695 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.695 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:53.695 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.695 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:14:53.695 
14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:14:53.695 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:53.695 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.695 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:53.695 [2024-10-01 14:39:45.342507] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:53.956 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.956 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:53.956 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:14:53.956 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:53.956 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:14:53.956 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:53.956 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:14:53.956 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:53.956 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:53.956 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:53.956 14:39:45 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:53.956 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:53.956 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.956 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.956 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.956 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.956 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.956 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.956 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.956 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:53.956 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.956 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.956 "name": "Existed_Raid", 00:14:53.956 "uuid": "4644efcf-50dd-4734-ad4c-fe8275df372e", 00:14:53.956 "strip_size_kb": 0, 00:14:53.956 "state": "online", 00:14:53.956 "raid_level": "raid1", 00:14:53.956 "superblock": true, 00:14:53.956 "num_base_bdevs": 2, 00:14:53.956 "num_base_bdevs_discovered": 1, 00:14:53.956 "num_base_bdevs_operational": 1, 00:14:53.956 "base_bdevs_list": [ 00:14:53.956 { 00:14:53.956 "name": null, 00:14:53.956 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:53.956 "is_configured": false, 00:14:53.956 "data_offset": 0, 00:14:53.956 "data_size": 7936 00:14:53.956 }, 00:14:53.956 { 00:14:53.956 "name": "BaseBdev2", 00:14:53.956 "uuid": "01b6ca74-997c-4d20-9984-9b0ca32cac8d", 00:14:53.956 "is_configured": true, 00:14:53.956 "data_offset": 256, 00:14:53.956 "data_size": 7936 00:14:53.956 } 00:14:53.956 ] 00:14:53.956 }' 00:14:53.956 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.956 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:54.219 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:54.219 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:54.219 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.219 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.219 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:54.219 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:54.219 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.219 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:54.219 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:54.219 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:54.219 14:39:45 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.219 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:54.219 [2024-10-01 14:39:45.772859] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:54.219 [2024-10-01 14:39:45.772996] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:54.219 [2024-10-01 14:39:45.838230] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:54.219 [2024-10-01 14:39:45.838310] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:54.219 [2024-10-01 14:39:45.838324] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:54.219 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.219 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:54.219 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:54.219 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.219 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:54.219 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.219 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:54.219 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.219 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:54.219 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:54.219 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:14:54.219 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 86186 00:14:54.219 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 86186 ']' 00:14:54.219 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 86186 00:14:54.219 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:14:54.219 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:54.219 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86186 00:14:54.481 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:54.481 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:54.481 killing process with pid 86186 00:14:54.481 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86186' 00:14:54.481 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 86186 00:14:54.481 14:39:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 86186 00:14:54.481 [2024-10-01 14:39:45.904490] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:54.481 [2024-10-01 14:39:45.916335] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:55.425 
14:39:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:14:55.425 00:14:55.425 real 0m4.109s 00:14:55.425 user 0m5.720s 00:14:55.425 sys 0m0.731s 00:14:55.425 14:39:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:55.425 14:39:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:55.425 ************************************ 00:14:55.425 END TEST raid_state_function_test_sb_md_interleaved 00:14:55.425 ************************************ 00:14:55.425 14:39:46 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:14:55.425 14:39:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:55.425 14:39:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:55.425 14:39:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:55.425 ************************************ 00:14:55.425 START TEST raid_superblock_test_md_interleaved 00:14:55.425 ************************************ 00:14:55.425 14:39:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:14:55.425 14:39:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:14:55.425 14:39:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:14:55.425 14:39:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:55.425 14:39:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:55.425 14:39:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:55.425 14:39:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:14:55.425 14:39:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:55.425 14:39:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:55.425 14:39:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:55.425 14:39:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:55.425 14:39:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:55.425 14:39:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:55.425 14:39:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:55.425 14:39:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:14:55.425 14:39:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:14:55.425 14:39:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=86427 00:14:55.425 14:39:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 86427 00:14:55.425 14:39:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 86427 ']' 00:14:55.425 14:39:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.425 14:39:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:55.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:55.425 14:39:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.425 14:39:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:55.425 14:39:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:55.425 14:39:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:55.425 [2024-10-01 14:39:46.988309] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:14:55.425 [2024-10-01 14:39:46.988467] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86427 ] 00:14:55.685 [2024-10-01 14:39:47.143368] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.946 [2024-10-01 14:39:47.397156] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.946 [2024-10-01 14:39:47.556689] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:55.946 [2024-10-01 14:39:47.556785] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:56.209 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:56.209 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:14:56.209 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:56.209 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:56.209 14:39:47 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:56.209 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:56.209 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:56.209 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:56.209 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:56.209 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:56.209 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:14:56.209 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.209 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:56.512 malloc1 00:14:56.512 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.512 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:56.512 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.512 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:56.512 [2024-10-01 14:39:47.904478] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:56.512 [2024-10-01 14:39:47.904553] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.512 [2024-10-01 14:39:47.904581] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:56.512 [2024-10-01 14:39:47.904592] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.512 [2024-10-01 14:39:47.906822] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.512 [2024-10-01 14:39:47.906866] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:56.512 pt1 00:14:56.512 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.512 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:56.512 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:56.512 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:56.512 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:56.512 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:56.512 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:56.512 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:56.512 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:56.512 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:14:56.512 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.512 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:56.512 malloc2 
00:14:56.513 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.513 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:56.513 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.513 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:56.513 [2024-10-01 14:39:47.959433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:56.513 [2024-10-01 14:39:47.959512] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.513 [2024-10-01 14:39:47.959538] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:56.513 [2024-10-01 14:39:47.959549] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.513 [2024-10-01 14:39:47.961765] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.513 [2024-10-01 14:39:47.961808] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:56.513 pt2 00:14:56.513 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.513 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:56.513 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:56.513 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:14:56.513 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.513 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:14:56.513 [2024-10-01 14:39:47.967514] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:56.513 [2024-10-01 14:39:47.969658] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:56.513 [2024-10-01 14:39:47.969910] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:56.513 [2024-10-01 14:39:47.969939] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:14:56.513 [2024-10-01 14:39:47.970043] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:56.513 [2024-10-01 14:39:47.970121] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:56.513 [2024-10-01 14:39:47.970135] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:56.513 [2024-10-01 14:39:47.970240] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:56.513 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.513 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:56.513 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:56.513 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.513 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:56.513 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:56.513 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:56.513 14:39:47 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.513 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.513 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.513 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.513 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.513 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.513 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:56.513 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.513 14:39:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.513 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.513 "name": "raid_bdev1", 00:14:56.513 "uuid": "2c9080b7-d7b7-4627-97dd-a31b4e89654f", 00:14:56.513 "strip_size_kb": 0, 00:14:56.513 "state": "online", 00:14:56.513 "raid_level": "raid1", 00:14:56.513 "superblock": true, 00:14:56.513 "num_base_bdevs": 2, 00:14:56.513 "num_base_bdevs_discovered": 2, 00:14:56.513 "num_base_bdevs_operational": 2, 00:14:56.513 "base_bdevs_list": [ 00:14:56.513 { 00:14:56.513 "name": "pt1", 00:14:56.513 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:56.513 "is_configured": true, 00:14:56.513 "data_offset": 256, 00:14:56.513 "data_size": 7936 00:14:56.513 }, 00:14:56.513 { 00:14:56.513 "name": "pt2", 00:14:56.513 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:56.513 "is_configured": true, 00:14:56.513 "data_offset": 256, 00:14:56.513 
"data_size": 7936 00:14:56.513 } 00:14:56.513 ] 00:14:56.513 }' 00:14:56.513 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.513 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:56.775 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:56.775 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:56.775 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:56.775 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:56.775 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:14:56.775 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:56.775 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:56.775 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.775 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:56.775 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:56.775 [2024-10-01 14:39:48.299889] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:56.775 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.775 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:56.775 "name": "raid_bdev1", 00:14:56.775 "aliases": [ 00:14:56.775 "2c9080b7-d7b7-4627-97dd-a31b4e89654f" 00:14:56.775 ], 
00:14:56.775 "product_name": "Raid Volume", 00:14:56.775 "block_size": 4128, 00:14:56.775 "num_blocks": 7936, 00:14:56.775 "uuid": "2c9080b7-d7b7-4627-97dd-a31b4e89654f", 00:14:56.775 "md_size": 32, 00:14:56.775 "md_interleave": true, 00:14:56.775 "dif_type": 0, 00:14:56.775 "assigned_rate_limits": { 00:14:56.775 "rw_ios_per_sec": 0, 00:14:56.775 "rw_mbytes_per_sec": 0, 00:14:56.775 "r_mbytes_per_sec": 0, 00:14:56.775 "w_mbytes_per_sec": 0 00:14:56.775 }, 00:14:56.775 "claimed": false, 00:14:56.775 "zoned": false, 00:14:56.775 "supported_io_types": { 00:14:56.775 "read": true, 00:14:56.775 "write": true, 00:14:56.775 "unmap": false, 00:14:56.775 "flush": false, 00:14:56.775 "reset": true, 00:14:56.775 "nvme_admin": false, 00:14:56.775 "nvme_io": false, 00:14:56.775 "nvme_io_md": false, 00:14:56.775 "write_zeroes": true, 00:14:56.775 "zcopy": false, 00:14:56.775 "get_zone_info": false, 00:14:56.775 "zone_management": false, 00:14:56.775 "zone_append": false, 00:14:56.775 "compare": false, 00:14:56.775 "compare_and_write": false, 00:14:56.775 "abort": false, 00:14:56.776 "seek_hole": false, 00:14:56.776 "seek_data": false, 00:14:56.776 "copy": false, 00:14:56.776 "nvme_iov_md": false 00:14:56.776 }, 00:14:56.776 "memory_domains": [ 00:14:56.776 { 00:14:56.776 "dma_device_id": "system", 00:14:56.776 "dma_device_type": 1 00:14:56.776 }, 00:14:56.776 { 00:14:56.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.776 "dma_device_type": 2 00:14:56.776 }, 00:14:56.776 { 00:14:56.776 "dma_device_id": "system", 00:14:56.776 "dma_device_type": 1 00:14:56.776 }, 00:14:56.776 { 00:14:56.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.776 "dma_device_type": 2 00:14:56.776 } 00:14:56.776 ], 00:14:56.776 "driver_specific": { 00:14:56.776 "raid": { 00:14:56.776 "uuid": "2c9080b7-d7b7-4627-97dd-a31b4e89654f", 00:14:56.776 "strip_size_kb": 0, 00:14:56.776 "state": "online", 00:14:56.776 "raid_level": "raid1", 00:14:56.776 "superblock": true, 00:14:56.776 "num_base_bdevs": 
2, 00:14:56.776 "num_base_bdevs_discovered": 2, 00:14:56.776 "num_base_bdevs_operational": 2, 00:14:56.776 "base_bdevs_list": [ 00:14:56.776 { 00:14:56.776 "name": "pt1", 00:14:56.776 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:56.776 "is_configured": true, 00:14:56.776 "data_offset": 256, 00:14:56.776 "data_size": 7936 00:14:56.776 }, 00:14:56.776 { 00:14:56.776 "name": "pt2", 00:14:56.776 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:56.776 "is_configured": true, 00:14:56.776 "data_offset": 256, 00:14:56.776 "data_size": 7936 00:14:56.776 } 00:14:56.776 ] 00:14:56.776 } 00:14:56.776 } 00:14:56.776 }' 00:14:56.776 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:56.776 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:56.776 pt2' 00:14:56.776 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:56.776 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:14:56.776 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:56.776 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:56.776 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.776 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:56.776 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:56.776 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.776 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:14:56.776 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:14:56.776 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:56.776 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:56.776 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:56.776 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.776 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:56.776 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.776 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:14:56.776 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:14:57.038 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:57.038 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.038 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:57.038 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:57.038 [2024-10-01 14:39:48.463907] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:14:57.038 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.038 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2c9080b7-d7b7-4627-97dd-a31b4e89654f 00:14:57.038 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 2c9080b7-d7b7-4627-97dd-a31b4e89654f ']' 00:14:57.038 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:57.038 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.038 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:57.038 [2024-10-01 14:39:48.495543] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:57.038 [2024-10-01 14:39:48.495583] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:57.039 [2024-10-01 14:39:48.495684] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:57.039 [2024-10-01 14:39:48.495776] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:57.039 [2024-10-01 14:39:48.495791] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.039 14:39:48 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- 
# set +x 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:57.039 [2024-10-01 14:39:48.595645] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:57.039 [2024-10-01 14:39:48.597899] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:57.039 [2024-10-01 14:39:48.598007] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:57.039 [2024-10-01 14:39:48.598065] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:57.039 [2024-10-01 14:39:48.598081] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:57.039 [2024-10-01 14:39:48.598093] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:14:57.039 request: 00:14:57.039 { 00:14:57.039 "name": "raid_bdev1", 00:14:57.039 "raid_level": "raid1", 00:14:57.039 "base_bdevs": [ 00:14:57.039 "malloc1", 00:14:57.039 "malloc2" 00:14:57.039 ], 00:14:57.039 "superblock": false, 00:14:57.039 "method": "bdev_raid_create", 00:14:57.039 "req_id": 1 00:14:57.039 } 00:14:57.039 Got JSON-RPC error response 00:14:57.039 response: 00:14:57.039 { 00:14:57.039 "code": -17, 00:14:57.039 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:57.039 } 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:57.039 [2024-10-01 14:39:48.643602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:57.039 [2024-10-01 14:39:48.643681] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.039 [2024-10-01 14:39:48.643701] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:57.039 [2024-10-01 14:39:48.643727] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.039 [2024-10-01 14:39:48.646026] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.039 [2024-10-01 14:39:48.646076] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:57.039 [2024-10-01 14:39:48.646146] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:57.039 [2024-10-01 14:39:48.646245] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:14:57.039 pt1 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.039 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.040 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.040 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.040 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.040 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:57.040 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.040 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.040 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.040 "name": "raid_bdev1", 00:14:57.040 "uuid": "2c9080b7-d7b7-4627-97dd-a31b4e89654f", 00:14:57.040 "strip_size_kb": 0, 00:14:57.040 "state": "configuring", 00:14:57.040 "raid_level": "raid1", 00:14:57.040 "superblock": true, 00:14:57.040 "num_base_bdevs": 2, 00:14:57.040 "num_base_bdevs_discovered": 1, 00:14:57.040 "num_base_bdevs_operational": 2, 00:14:57.040 "base_bdevs_list": [ 00:14:57.040 { 00:14:57.040 "name": "pt1", 00:14:57.040 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:57.040 "is_configured": true, 00:14:57.040 "data_offset": 256, 00:14:57.040 "data_size": 7936 00:14:57.040 }, 00:14:57.040 { 00:14:57.040 "name": null, 00:14:57.040 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:57.040 "is_configured": false, 00:14:57.040 "data_offset": 256, 00:14:57.040 "data_size": 7936 00:14:57.040 } 00:14:57.040 ] 00:14:57.040 }' 00:14:57.040 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.040 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:57.302 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:14:57.302 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:57.302 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:57.302 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:57.302 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.302 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:14:57.302 [2024-10-01 14:39:48.959665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:57.302 [2024-10-01 14:39:48.959762] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.302 [2024-10-01 14:39:48.959785] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:57.302 [2024-10-01 14:39:48.959797] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.302 [2024-10-01 14:39:48.959988] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.302 [2024-10-01 14:39:48.960006] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:57.302 [2024-10-01 14:39:48.960062] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:57.302 [2024-10-01 14:39:48.960093] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:57.302 [2024-10-01 14:39:48.960193] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:57.302 [2024-10-01 14:39:48.960204] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:14:57.302 [2024-10-01 14:39:48.960275] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:57.302 [2024-10-01 14:39:48.960340] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:57.302 [2024-10-01 14:39:48.960348] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:57.302 [2024-10-01 14:39:48.960416] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:57.302 pt2 00:14:57.302 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.302 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:57.302 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:57.302 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:57.302 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:57.302 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:57.302 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:57.302 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:57.302 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:57.302 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.302 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.302 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.302 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.302 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.302 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.302 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.302 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:57.302 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.562 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.562 "name": "raid_bdev1", 00:14:57.562 "uuid": "2c9080b7-d7b7-4627-97dd-a31b4e89654f", 00:14:57.562 "strip_size_kb": 0, 00:14:57.562 "state": "online", 00:14:57.562 "raid_level": "raid1", 00:14:57.562 "superblock": true, 00:14:57.562 "num_base_bdevs": 2, 00:14:57.562 "num_base_bdevs_discovered": 2, 00:14:57.562 "num_base_bdevs_operational": 2, 00:14:57.562 "base_bdevs_list": [ 00:14:57.562 { 00:14:57.562 "name": "pt1", 00:14:57.562 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:57.562 "is_configured": true, 00:14:57.562 "data_offset": 256, 00:14:57.562 "data_size": 7936 00:14:57.562 }, 00:14:57.562 { 00:14:57.562 "name": "pt2", 00:14:57.562 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:57.562 "is_configured": true, 00:14:57.562 "data_offset": 256, 00:14:57.562 "data_size": 7936 00:14:57.562 } 00:14:57.562 ] 00:14:57.562 }' 00:14:57.562 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.562 14:39:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:57.822 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:57.822 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:57.822 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:57.822 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:57.822 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:14:57.822 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 
00:14:57.822 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:57.822 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.822 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:57.822 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:57.822 [2024-10-01 14:39:49.304218] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:57.822 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.822 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:57.822 "name": "raid_bdev1", 00:14:57.822 "aliases": [ 00:14:57.822 "2c9080b7-d7b7-4627-97dd-a31b4e89654f" 00:14:57.822 ], 00:14:57.822 "product_name": "Raid Volume", 00:14:57.822 "block_size": 4128, 00:14:57.822 "num_blocks": 7936, 00:14:57.822 "uuid": "2c9080b7-d7b7-4627-97dd-a31b4e89654f", 00:14:57.822 "md_size": 32, 00:14:57.822 "md_interleave": true, 00:14:57.822 "dif_type": 0, 00:14:57.823 "assigned_rate_limits": { 00:14:57.823 "rw_ios_per_sec": 0, 00:14:57.823 "rw_mbytes_per_sec": 0, 00:14:57.823 "r_mbytes_per_sec": 0, 00:14:57.823 "w_mbytes_per_sec": 0 00:14:57.823 }, 00:14:57.823 "claimed": false, 00:14:57.823 "zoned": false, 00:14:57.823 "supported_io_types": { 00:14:57.823 "read": true, 00:14:57.823 "write": true, 00:14:57.823 "unmap": false, 00:14:57.823 "flush": false, 00:14:57.823 "reset": true, 00:14:57.823 "nvme_admin": false, 00:14:57.823 "nvme_io": false, 00:14:57.823 "nvme_io_md": false, 00:14:57.823 "write_zeroes": true, 00:14:57.823 "zcopy": false, 00:14:57.823 "get_zone_info": false, 00:14:57.823 "zone_management": false, 00:14:57.823 "zone_append": false, 00:14:57.823 "compare": false, 00:14:57.823 "compare_and_write": 
false, 00:14:57.823 "abort": false, 00:14:57.823 "seek_hole": false, 00:14:57.823 "seek_data": false, 00:14:57.823 "copy": false, 00:14:57.823 "nvme_iov_md": false 00:14:57.823 }, 00:14:57.823 "memory_domains": [ 00:14:57.823 { 00:14:57.823 "dma_device_id": "system", 00:14:57.823 "dma_device_type": 1 00:14:57.823 }, 00:14:57.823 { 00:14:57.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.823 "dma_device_type": 2 00:14:57.823 }, 00:14:57.823 { 00:14:57.823 "dma_device_id": "system", 00:14:57.823 "dma_device_type": 1 00:14:57.823 }, 00:14:57.823 { 00:14:57.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.823 "dma_device_type": 2 00:14:57.823 } 00:14:57.823 ], 00:14:57.823 "driver_specific": { 00:14:57.823 "raid": { 00:14:57.823 "uuid": "2c9080b7-d7b7-4627-97dd-a31b4e89654f", 00:14:57.823 "strip_size_kb": 0, 00:14:57.823 "state": "online", 00:14:57.823 "raid_level": "raid1", 00:14:57.823 "superblock": true, 00:14:57.823 "num_base_bdevs": 2, 00:14:57.823 "num_base_bdevs_discovered": 2, 00:14:57.823 "num_base_bdevs_operational": 2, 00:14:57.823 "base_bdevs_list": [ 00:14:57.823 { 00:14:57.823 "name": "pt1", 00:14:57.823 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:57.823 "is_configured": true, 00:14:57.823 "data_offset": 256, 00:14:57.823 "data_size": 7936 00:14:57.823 }, 00:14:57.823 { 00:14:57.823 "name": "pt2", 00:14:57.823 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:57.823 "is_configured": true, 00:14:57.823 "data_offset": 256, 00:14:57.823 "data_size": 7936 00:14:57.823 } 00:14:57.823 ] 00:14:57.823 } 00:14:57.823 } 00:14:57.823 }' 00:14:57.823 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:57.823 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:57.823 pt2' 00:14:57.823 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.823 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:14:57.823 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:57.823 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.823 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:57.823 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.823 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:57.823 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.823 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:14:57.823 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:14:57.823 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:57.823 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:57.823 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.823 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.823 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:57.823 14:39:49 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.823 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:14:57.823 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:14:57.823 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:57.823 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:57.823 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.823 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:57.823 [2024-10-01 14:39:49.480203] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:57.823 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.823 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 2c9080b7-d7b7-4627-97dd-a31b4e89654f '!=' 2c9080b7-d7b7-4627-97dd-a31b4e89654f ']' 00:14:57.823 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:14:57.823 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:57.823 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:14:57.823 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:57.823 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.823 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:57.823 
[2024-10-01 14:39:49.503984] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:58.085 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.085 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:58.085 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.085 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.085 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:58.085 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:58.085 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:58.085 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.085 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.085 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.085 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.085 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.085 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.085 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.085 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:58.085 14:39:49 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.085 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.085 "name": "raid_bdev1", 00:14:58.085 "uuid": "2c9080b7-d7b7-4627-97dd-a31b4e89654f", 00:14:58.085 "strip_size_kb": 0, 00:14:58.085 "state": "online", 00:14:58.085 "raid_level": "raid1", 00:14:58.085 "superblock": true, 00:14:58.085 "num_base_bdevs": 2, 00:14:58.085 "num_base_bdevs_discovered": 1, 00:14:58.085 "num_base_bdevs_operational": 1, 00:14:58.085 "base_bdevs_list": [ 00:14:58.085 { 00:14:58.085 "name": null, 00:14:58.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.085 "is_configured": false, 00:14:58.085 "data_offset": 0, 00:14:58.085 "data_size": 7936 00:14:58.085 }, 00:14:58.085 { 00:14:58.085 "name": "pt2", 00:14:58.085 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:58.085 "is_configured": true, 00:14:58.085 "data_offset": 256, 00:14:58.085 "data_size": 7936 00:14:58.085 } 00:14:58.085 ] 00:14:58.085 }' 00:14:58.085 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.085 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:58.347 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:58.347 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.347 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:58.347 [2024-10-01 14:39:49.819973] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:58.347 [2024-10-01 14:39:49.820043] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:58.347 [2024-10-01 14:39:49.820167] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:14:58.347 [2024-10-01 14:39:49.820240] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:58.347 [2024-10-01 14:39:49.820256] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:58.347 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.347 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.347 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:58.347 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.347 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:58.347 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.347 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:58.347 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:58.347 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:58.347 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:58.347 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:58.347 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.347 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:58.347 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:58.347 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:58.347 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:58.347 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:58.347 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:58.347 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:14:58.347 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:58.347 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.347 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:58.347 [2024-10-01 14:39:49.875938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:58.347 [2024-10-01 14:39:49.876040] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.347 [2024-10-01 14:39:49.876065] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:58.347 [2024-10-01 14:39:49.876079] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.347 [2024-10-01 14:39:49.878809] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.347 [2024-10-01 14:39:49.878868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:58.347 [2024-10-01 14:39:49.878950] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:58.347 [2024-10-01 14:39:49.879025] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:58.347 [2024-10-01 14:39:49.879117] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:58.348 [2024-10-01 14:39:49.879134] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:14:58.348 [2024-10-01 14:39:49.879271] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:58.348 [2024-10-01 14:39:49.879350] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:58.348 [2024-10-01 14:39:49.879359] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:58.348 [2024-10-01 14:39:49.879446] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.348 pt2 00:14:58.348 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.348 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:58.348 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.348 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.348 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:58.348 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:58.348 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:58.348 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.348 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.348 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:58.348 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.348 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.348 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.348 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:58.348 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.348 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.348 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.348 "name": "raid_bdev1", 00:14:58.348 "uuid": "2c9080b7-d7b7-4627-97dd-a31b4e89654f", 00:14:58.348 "strip_size_kb": 0, 00:14:58.348 "state": "online", 00:14:58.348 "raid_level": "raid1", 00:14:58.348 "superblock": true, 00:14:58.348 "num_base_bdevs": 2, 00:14:58.348 "num_base_bdevs_discovered": 1, 00:14:58.348 "num_base_bdevs_operational": 1, 00:14:58.348 "base_bdevs_list": [ 00:14:58.348 { 00:14:58.348 "name": null, 00:14:58.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.348 "is_configured": false, 00:14:58.348 "data_offset": 256, 00:14:58.348 "data_size": 7936 00:14:58.348 }, 00:14:58.348 { 00:14:58.348 "name": "pt2", 00:14:58.348 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:58.348 "is_configured": true, 00:14:58.348 "data_offset": 256, 00:14:58.348 "data_size": 7936 00:14:58.348 } 00:14:58.348 ] 00:14:58.348 }' 00:14:58.348 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.348 14:39:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:58.609 14:39:50 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:58.609 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.609 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:58.609 [2024-10-01 14:39:50.208021] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:58.609 [2024-10-01 14:39:50.208093] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:58.609 [2024-10-01 14:39:50.208216] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:58.609 [2024-10-01 14:39:50.208294] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:58.609 [2024-10-01 14:39:50.208306] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:58.609 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.609 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.609 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.609 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:58.609 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:58.609 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.609 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:58.609 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:14:58.609 14:39:50 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:14:58.609 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:58.609 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.609 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:58.609 [2024-10-01 14:39:50.252008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:58.609 [2024-10-01 14:39:50.252100] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.609 [2024-10-01 14:39:50.252129] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:14:58.609 [2024-10-01 14:39:50.252140] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.609 [2024-10-01 14:39:50.254854] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.609 [2024-10-01 14:39:50.254905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:58.609 [2024-10-01 14:39:50.254986] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:58.609 [2024-10-01 14:39:50.255054] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:58.609 [2024-10-01 14:39:50.255181] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:58.609 [2024-10-01 14:39:50.255193] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:58.609 [2024-10-01 14:39:50.255219] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:14:58.609 [2024-10-01 14:39:50.255282] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:58.609 [2024-10-01 14:39:50.255374] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:14:58.609 [2024-10-01 14:39:50.255384] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:14:58.609 [2024-10-01 14:39:50.255478] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:58.609 [2024-10-01 14:39:50.255548] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:14:58.609 [2024-10-01 14:39:50.255559] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:14:58.609 [2024-10-01 14:39:50.255640] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.609 pt1 00:14:58.609 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.610 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:14:58.610 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:58.610 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.610 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.610 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:58.610 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:58.610 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:58.610 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:14:58.610 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.610 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.610 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.610 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.610 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.610 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.610 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:58.610 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.610 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.610 "name": "raid_bdev1", 00:14:58.610 "uuid": "2c9080b7-d7b7-4627-97dd-a31b4e89654f", 00:14:58.610 "strip_size_kb": 0, 00:14:58.610 "state": "online", 00:14:58.610 "raid_level": "raid1", 00:14:58.610 "superblock": true, 00:14:58.610 "num_base_bdevs": 2, 00:14:58.610 "num_base_bdevs_discovered": 1, 00:14:58.610 "num_base_bdevs_operational": 1, 00:14:58.610 "base_bdevs_list": [ 00:14:58.610 { 00:14:58.610 "name": null, 00:14:58.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.610 "is_configured": false, 00:14:58.610 "data_offset": 256, 00:14:58.610 "data_size": 7936 00:14:58.610 }, 00:14:58.610 { 00:14:58.610 "name": "pt2", 00:14:58.610 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:58.610 "is_configured": true, 00:14:58.610 "data_offset": 256, 00:14:58.610 "data_size": 7936 00:14:58.610 } 00:14:58.610 ] 00:14:58.610 }' 00:14:58.610 14:39:50 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.610 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:59.182 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:59.182 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:59.182 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.182 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:59.182 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.182 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:59.182 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:59.182 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:59.182 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.182 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:59.182 [2024-10-01 14:39:50.592408] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:59.182 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.182 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 2c9080b7-d7b7-4627-97dd-a31b4e89654f '!=' 2c9080b7-d7b7-4627-97dd-a31b4e89654f ']' 00:14:59.182 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 86427 00:14:59.182 
14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 86427 ']' 00:14:59.182 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 86427 00:14:59.182 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:14:59.182 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:59.182 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86427 00:14:59.182 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:59.183 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:59.183 killing process with pid 86427 00:14:59.183 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86427' 00:14:59.183 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 -- # kill 86427 00:14:59.183 [2024-10-01 14:39:50.643724] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:59.183 14:39:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # wait 86427 00:14:59.183 [2024-10-01 14:39:50.643873] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:59.183 [2024-10-01 14:39:50.643937] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:59.183 [2024-10-01 14:39:50.643963] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:14:59.183 [2024-10-01 14:39:50.791232] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:00.126 14:39:51 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@565 -- # return 0 00:15:00.126 00:15:00.126 real 0m4.786s 00:15:00.126 user 0m6.959s 00:15:00.126 sys 0m0.907s 00:15:00.126 14:39:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:00.126 14:39:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:00.126 ************************************ 00:15:00.126 END TEST raid_superblock_test_md_interleaved 00:15:00.126 ************************************ 00:15:00.126 14:39:51 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:15:00.126 14:39:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:00.126 14:39:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:00.126 14:39:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:00.126 ************************************ 00:15:00.126 START TEST raid_rebuild_test_sb_md_interleaved 00:15:00.126 ************************************ 00:15:00.126 14:39:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false 00:15:00.126 14:39:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:00.126 14:39:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:00.126 14:39:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:00.126 14:39:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:00.126 14:39:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:15:00.126 14:39:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:00.126 14:39:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:00.126 14:39:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:00.126 14:39:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:00.126 14:39:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:00.126 14:39:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:00.126 14:39:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:00.126 14:39:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:00.126 14:39:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:00.126 14:39:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:00.126 14:39:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:00.126 14:39:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:00.126 14:39:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:00.126 14:39:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:00.126 14:39:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:00.126 14:39:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:00.126 14:39:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:00.126 14:39:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 
00:15:00.126 14:39:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:00.126 14:39:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=86743 00:15:00.126 14:39:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:00.126 14:39:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 86743 00:15:00.126 14:39:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 86743 ']' 00:15:00.126 14:39:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.126 14:39:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:00.126 14:39:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.126 14:39:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:00.126 14:39:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:00.387 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:00.387 Zero copy mechanism will not be used. 00:15:00.387 [2024-10-01 14:39:51.840597] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:15:00.387 [2024-10-01 14:39:51.840741] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86743 ] 00:15:00.387 [2024-10-01 14:39:51.990879] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.722 [2024-10-01 14:39:52.247529] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.984 [2024-10-01 14:39:52.400363] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:00.984 [2024-10-01 14:39:52.400414] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:01.246 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:01.246 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:15:01.246 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:01.246 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:15:01.246 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.246 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:01.246 BaseBdev1_malloc 00:15:01.246 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.246 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:01.246 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.246 14:39:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:01.246 [2024-10-01 14:39:52.733836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:01.246 [2024-10-01 14:39:52.733903] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.246 [2024-10-01 14:39:52.733931] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:01.246 [2024-10-01 14:39:52.733943] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.246 [2024-10-01 14:39:52.735962] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.246 [2024-10-01 14:39:52.735995] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:01.246 BaseBdev1 00:15:01.246 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.246 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:01.246 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:15:01.246 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.246 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:01.246 BaseBdev2_malloc 00:15:01.246 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.246 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:01.246 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.246 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:15:01.246 [2024-10-01 14:39:52.789051] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:01.246 [2024-10-01 14:39:52.789130] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.246 [2024-10-01 14:39:52.789154] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:01.246 [2024-10-01 14:39:52.789165] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.246 [2024-10-01 14:39:52.791222] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.246 [2024-10-01 14:39:52.791257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:01.246 BaseBdev2 00:15:01.246 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.246 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:15:01.246 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.246 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:01.246 spare_malloc 00:15:01.246 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.246 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:01.246 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.246 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:01.246 spare_delay 00:15:01.246 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.246 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:01.246 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.246 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:01.246 [2024-10-01 14:39:52.841006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:01.246 [2024-10-01 14:39:52.841061] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.246 [2024-10-01 14:39:52.841083] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:01.247 [2024-10-01 14:39:52.841094] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.247 [2024-10-01 14:39:52.843106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.247 [2024-10-01 14:39:52.843137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:01.247 spare 00:15:01.247 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.247 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:01.247 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.247 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:01.247 [2024-10-01 14:39:52.849068] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:01.247 [2024-10-01 14:39:52.850976] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:01.247 [2024-10-01 
14:39:52.851166] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:01.247 [2024-10-01 14:39:52.851180] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:15:01.247 [2024-10-01 14:39:52.851262] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:01.247 [2024-10-01 14:39:52.851334] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:01.247 [2024-10-01 14:39:52.851343] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:01.247 [2024-10-01 14:39:52.851417] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:01.247 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.247 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:01.247 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:01.247 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.247 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:01.247 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:01.247 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:01.247 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.247 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.247 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:01.247 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.247 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.247 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.247 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:01.247 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.247 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.247 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.247 "name": "raid_bdev1", 00:15:01.247 "uuid": "f3c2094f-b012-48f5-99d1-40e39769a89f", 00:15:01.247 "strip_size_kb": 0, 00:15:01.247 "state": "online", 00:15:01.247 "raid_level": "raid1", 00:15:01.247 "superblock": true, 00:15:01.247 "num_base_bdevs": 2, 00:15:01.247 "num_base_bdevs_discovered": 2, 00:15:01.247 "num_base_bdevs_operational": 2, 00:15:01.247 "base_bdevs_list": [ 00:15:01.247 { 00:15:01.247 "name": "BaseBdev1", 00:15:01.247 "uuid": "9cf36731-906a-551f-b267-c9709cc43e11", 00:15:01.247 "is_configured": true, 00:15:01.247 "data_offset": 256, 00:15:01.247 "data_size": 7936 00:15:01.247 }, 00:15:01.247 { 00:15:01.247 "name": "BaseBdev2", 00:15:01.247 "uuid": "918c9fbd-082a-596e-981c-59ce193a54ba", 00:15:01.247 "is_configured": true, 00:15:01.247 "data_offset": 256, 00:15:01.247 "data_size": 7936 00:15:01.247 } 00:15:01.247 ] 00:15:01.247 }' 00:15:01.247 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.247 14:39:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:01.509 14:39:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:01.509 14:39:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.509 14:39:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:01.509 14:39:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:01.509 [2024-10-01 14:39:53.177418] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:01.509 14:39:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.771 14:39:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:15:01.771 14:39:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:01.771 14:39:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.771 14:39:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.771 14:39:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:01.771 14:39:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.771 14:39:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:15:01.771 14:39:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:01.771 14:39:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:15:01.771 14:39:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:01.771 14:39:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.771 14:39:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:01.771 [2024-10-01 14:39:53.249141] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:01.771 14:39:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.771 14:39:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:01.771 14:39:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:01.771 14:39:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.771 14:39:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:01.771 14:39:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:01.771 14:39:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:01.771 14:39:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.771 14:39:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.771 14:39:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.771 14:39:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.771 14:39:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.771 14:39:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.771 14:39:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.771 14:39:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:01.771 14:39:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.771 14:39:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.771 "name": "raid_bdev1", 00:15:01.771 "uuid": "f3c2094f-b012-48f5-99d1-40e39769a89f", 00:15:01.771 "strip_size_kb": 0, 00:15:01.771 "state": "online", 00:15:01.771 "raid_level": "raid1", 00:15:01.771 "superblock": true, 00:15:01.771 "num_base_bdevs": 2, 00:15:01.771 "num_base_bdevs_discovered": 1, 00:15:01.771 "num_base_bdevs_operational": 1, 00:15:01.771 "base_bdevs_list": [ 00:15:01.771 { 00:15:01.771 "name": null, 00:15:01.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.771 "is_configured": false, 00:15:01.771 "data_offset": 0, 00:15:01.771 "data_size": 7936 00:15:01.771 }, 00:15:01.771 { 00:15:01.771 "name": "BaseBdev2", 00:15:01.771 "uuid": "918c9fbd-082a-596e-981c-59ce193a54ba", 00:15:01.771 "is_configured": true, 00:15:01.771 "data_offset": 256, 00:15:01.771 "data_size": 7936 00:15:01.771 } 00:15:01.771 ] 00:15:01.771 }' 00:15:01.771 14:39:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.771 14:39:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:02.033 14:39:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:02.033 14:39:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.033 14:39:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:02.033 [2024-10-01 14:39:53.589242] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:02.033 [2024-10-01 14:39:53.600217] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:02.033 14:39:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.033 14:39:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:02.033 [2024-10-01 14:39:53.602156] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:02.978 14:39:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.978 14:39:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.978 14:39:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.978 14:39:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:02.978 14:39:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.978 14:39:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.978 14:39:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.978 14:39:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.978 14:39:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:02.978 14:39:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.978 14:39:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.978 "name": "raid_bdev1", 00:15:02.978 
"uuid": "f3c2094f-b012-48f5-99d1-40e39769a89f", 00:15:02.978 "strip_size_kb": 0, 00:15:02.978 "state": "online", 00:15:02.978 "raid_level": "raid1", 00:15:02.978 "superblock": true, 00:15:02.978 "num_base_bdevs": 2, 00:15:02.978 "num_base_bdevs_discovered": 2, 00:15:02.978 "num_base_bdevs_operational": 2, 00:15:02.978 "process": { 00:15:02.978 "type": "rebuild", 00:15:02.978 "target": "spare", 00:15:02.978 "progress": { 00:15:02.978 "blocks": 2560, 00:15:02.978 "percent": 32 00:15:02.978 } 00:15:02.978 }, 00:15:02.978 "base_bdevs_list": [ 00:15:02.978 { 00:15:02.978 "name": "spare", 00:15:02.978 "uuid": "6d380311-099e-5464-98df-e6cf6958cbca", 00:15:02.978 "is_configured": true, 00:15:02.978 "data_offset": 256, 00:15:02.978 "data_size": 7936 00:15:02.978 }, 00:15:02.978 { 00:15:02.978 "name": "BaseBdev2", 00:15:02.978 "uuid": "918c9fbd-082a-596e-981c-59ce193a54ba", 00:15:02.978 "is_configured": true, 00:15:02.978 "data_offset": 256, 00:15:02.978 "data_size": 7936 00:15:02.978 } 00:15:02.978 ] 00:15:02.978 }' 00:15:02.978 14:39:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:03.238 14:39:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:03.238 14:39:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:03.238 14:39:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:03.238 14:39:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:03.238 14:39:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.238 14:39:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:03.238 [2024-10-01 14:39:54.720339] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:15:03.238 [2024-10-01 14:39:54.809477] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:03.238 [2024-10-01 14:39:54.809572] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.238 [2024-10-01 14:39:54.809589] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:03.238 [2024-10-01 14:39:54.809599] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:03.238 14:39:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.238 14:39:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:03.238 14:39:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.238 14:39:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.238 14:39:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:03.238 14:39:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:03.239 14:39:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:03.239 14:39:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.239 14:39:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.239 14:39:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.239 14:39:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.239 14:39:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.239 14:39:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.239 14:39:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.239 14:39:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:03.239 14:39:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.239 14:39:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.239 "name": "raid_bdev1", 00:15:03.239 "uuid": "f3c2094f-b012-48f5-99d1-40e39769a89f", 00:15:03.239 "strip_size_kb": 0, 00:15:03.239 "state": "online", 00:15:03.239 "raid_level": "raid1", 00:15:03.239 "superblock": true, 00:15:03.239 "num_base_bdevs": 2, 00:15:03.239 "num_base_bdevs_discovered": 1, 00:15:03.239 "num_base_bdevs_operational": 1, 00:15:03.239 "base_bdevs_list": [ 00:15:03.239 { 00:15:03.239 "name": null, 00:15:03.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.239 "is_configured": false, 00:15:03.239 "data_offset": 0, 00:15:03.239 "data_size": 7936 00:15:03.239 }, 00:15:03.239 { 00:15:03.239 "name": "BaseBdev2", 00:15:03.239 "uuid": "918c9fbd-082a-596e-981c-59ce193a54ba", 00:15:03.239 "is_configured": true, 00:15:03.239 "data_offset": 256, 00:15:03.239 "data_size": 7936 00:15:03.239 } 00:15:03.239 ] 00:15:03.239 }' 00:15:03.239 14:39:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.239 14:39:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:03.808 14:39:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:03.808 14:39:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:15:03.808 14:39:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:03.808 14:39:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:03.808 14:39:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:03.808 14:39:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.809 14:39:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.809 14:39:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.809 14:39:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:03.809 14:39:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.809 14:39:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:03.809 "name": "raid_bdev1", 00:15:03.809 "uuid": "f3c2094f-b012-48f5-99d1-40e39769a89f", 00:15:03.809 "strip_size_kb": 0, 00:15:03.809 "state": "online", 00:15:03.809 "raid_level": "raid1", 00:15:03.809 "superblock": true, 00:15:03.809 "num_base_bdevs": 2, 00:15:03.809 "num_base_bdevs_discovered": 1, 00:15:03.809 "num_base_bdevs_operational": 1, 00:15:03.809 "base_bdevs_list": [ 00:15:03.809 { 00:15:03.809 "name": null, 00:15:03.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.809 "is_configured": false, 00:15:03.809 "data_offset": 0, 00:15:03.809 "data_size": 7936 00:15:03.809 }, 00:15:03.809 { 00:15:03.809 "name": "BaseBdev2", 00:15:03.809 "uuid": "918c9fbd-082a-596e-981c-59ce193a54ba", 00:15:03.809 "is_configured": true, 00:15:03.809 "data_offset": 256, 00:15:03.809 "data_size": 7936 00:15:03.809 } 00:15:03.809 ] 00:15:03.809 }' 
00:15:03.809 14:39:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:03.809 14:39:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:03.809 14:39:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:03.809 14:39:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:03.809 14:39:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:03.809 14:39:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.809 14:39:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:03.809 [2024-10-01 14:39:55.309452] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:03.809 [2024-10-01 14:39:55.319668] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:03.809 14:39:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.809 14:39:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:03.809 [2024-10-01 14:39:55.321560] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:04.754 14:39:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:04.754 14:39:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.754 14:39:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:04.754 14:39:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:15:04.754 14:39:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.754 14:39:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.754 14:39:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.754 14:39:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.754 14:39:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:04.754 14:39:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.754 14:39:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.754 "name": "raid_bdev1", 00:15:04.754 "uuid": "f3c2094f-b012-48f5-99d1-40e39769a89f", 00:15:04.754 "strip_size_kb": 0, 00:15:04.754 "state": "online", 00:15:04.754 "raid_level": "raid1", 00:15:04.754 "superblock": true, 00:15:04.754 "num_base_bdevs": 2, 00:15:04.754 "num_base_bdevs_discovered": 2, 00:15:04.754 "num_base_bdevs_operational": 2, 00:15:04.754 "process": { 00:15:04.754 "type": "rebuild", 00:15:04.754 "target": "spare", 00:15:04.754 "progress": { 00:15:04.754 "blocks": 2560, 00:15:04.754 "percent": 32 00:15:04.754 } 00:15:04.754 }, 00:15:04.754 "base_bdevs_list": [ 00:15:04.754 { 00:15:04.754 "name": "spare", 00:15:04.754 "uuid": "6d380311-099e-5464-98df-e6cf6958cbca", 00:15:04.754 "is_configured": true, 00:15:04.754 "data_offset": 256, 00:15:04.754 "data_size": 7936 00:15:04.754 }, 00:15:04.754 { 00:15:04.754 "name": "BaseBdev2", 00:15:04.754 "uuid": "918c9fbd-082a-596e-981c-59ce193a54ba", 00:15:04.754 "is_configured": true, 00:15:04.754 "data_offset": 256, 00:15:04.754 "data_size": 7936 00:15:04.754 } 00:15:04.754 ] 00:15:04.754 }' 00:15:04.754 14:39:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:04.754 14:39:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:04.754 14:39:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.754 14:39:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:04.754 14:39:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:04.754 14:39:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:04.754 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:04.754 14:39:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:04.754 14:39:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:04.754 14:39:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:04.754 14:39:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=616 00:15:04.754 14:39:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:04.754 14:39:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:04.754 14:39:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.754 14:39:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:04.754 14:39:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:04.754 14:39:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.754 14:39:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.754 14:39:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.754 14:39:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:04.754 14:39:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.754 14:39:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.036 14:39:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.036 "name": "raid_bdev1", 00:15:05.036 "uuid": "f3c2094f-b012-48f5-99d1-40e39769a89f", 00:15:05.036 "strip_size_kb": 0, 00:15:05.036 "state": "online", 00:15:05.036 "raid_level": "raid1", 00:15:05.036 "superblock": true, 00:15:05.036 "num_base_bdevs": 2, 00:15:05.036 "num_base_bdevs_discovered": 2, 00:15:05.036 "num_base_bdevs_operational": 2, 00:15:05.036 "process": { 00:15:05.036 "type": "rebuild", 00:15:05.036 "target": "spare", 00:15:05.036 "progress": { 00:15:05.036 "blocks": 2560, 00:15:05.036 "percent": 32 00:15:05.036 } 00:15:05.036 }, 00:15:05.036 "base_bdevs_list": [ 00:15:05.036 { 00:15:05.036 "name": "spare", 00:15:05.036 "uuid": "6d380311-099e-5464-98df-e6cf6958cbca", 00:15:05.036 "is_configured": true, 00:15:05.036 "data_offset": 256, 00:15:05.036 "data_size": 7936 00:15:05.036 }, 00:15:05.036 { 00:15:05.036 "name": "BaseBdev2", 00:15:05.036 "uuid": "918c9fbd-082a-596e-981c-59ce193a54ba", 00:15:05.036 "is_configured": true, 00:15:05.036 "data_offset": 256, 00:15:05.036 "data_size": 7936 00:15:05.036 } 00:15:05.036 ] 00:15:05.036 }' 00:15:05.036 14:39:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.036 14:39:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:05.036 14:39:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.036 14:39:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:05.036 14:39:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:05.979 14:39:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:05.979 14:39:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:05.979 14:39:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.979 14:39:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:05.979 14:39:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:05.979 14:39:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.979 14:39:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.979 14:39:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.979 14:39:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:05.979 14:39:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.979 14:39:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.979 14:39:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.979 "name": "raid_bdev1", 00:15:05.979 "uuid": "f3c2094f-b012-48f5-99d1-40e39769a89f", 00:15:05.979 "strip_size_kb": 0, 00:15:05.979 "state": "online", 00:15:05.979 "raid_level": "raid1", 00:15:05.979 "superblock": true, 00:15:05.979 "num_base_bdevs": 2, 00:15:05.979 "num_base_bdevs_discovered": 2, 00:15:05.979 "num_base_bdevs_operational": 2, 00:15:05.980 "process": { 00:15:05.980 "type": "rebuild", 00:15:05.980 "target": "spare", 00:15:05.980 "progress": { 00:15:05.980 "blocks": 5376, 00:15:05.980 "percent": 67 00:15:05.980 } 00:15:05.980 }, 00:15:05.980 "base_bdevs_list": [ 00:15:05.980 { 00:15:05.980 "name": "spare", 00:15:05.980 "uuid": "6d380311-099e-5464-98df-e6cf6958cbca", 00:15:05.980 "is_configured": true, 00:15:05.980 "data_offset": 256, 00:15:05.980 "data_size": 7936 00:15:05.980 }, 00:15:05.980 { 00:15:05.980 "name": "BaseBdev2", 00:15:05.980 "uuid": "918c9fbd-082a-596e-981c-59ce193a54ba", 00:15:05.980 "is_configured": true, 00:15:05.980 "data_offset": 256, 00:15:05.980 "data_size": 7936 00:15:05.980 } 00:15:05.980 ] 00:15:05.980 }' 00:15:05.980 14:39:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.980 14:39:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:05.980 14:39:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.980 14:39:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:05.980 14:39:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:06.924 [2024-10-01 14:39:58.438528] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:06.924 [2024-10-01 14:39:58.438629] 
bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:06.924 [2024-10-01 14:39:58.438759] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:07.185 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:07.185 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:07.185 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.185 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:07.185 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:07.185 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.185 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.185 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.185 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:07.185 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.185 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.185 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.185 "name": "raid_bdev1", 00:15:07.185 "uuid": "f3c2094f-b012-48f5-99d1-40e39769a89f", 00:15:07.185 "strip_size_kb": 0, 00:15:07.185 "state": "online", 00:15:07.185 "raid_level": "raid1", 00:15:07.185 "superblock": true, 00:15:07.185 "num_base_bdevs": 2, 00:15:07.185 
"num_base_bdevs_discovered": 2, 00:15:07.185 "num_base_bdevs_operational": 2, 00:15:07.185 "base_bdevs_list": [ 00:15:07.185 { 00:15:07.185 "name": "spare", 00:15:07.185 "uuid": "6d380311-099e-5464-98df-e6cf6958cbca", 00:15:07.185 "is_configured": true, 00:15:07.185 "data_offset": 256, 00:15:07.185 "data_size": 7936 00:15:07.185 }, 00:15:07.185 { 00:15:07.185 "name": "BaseBdev2", 00:15:07.185 "uuid": "918c9fbd-082a-596e-981c-59ce193a54ba", 00:15:07.185 "is_configured": true, 00:15:07.185 "data_offset": 256, 00:15:07.185 "data_size": 7936 00:15:07.185 } 00:15:07.185 ] 00:15:07.185 }' 00:15:07.185 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.185 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:07.185 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.185 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:07.185 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:15:07.185 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:07.185 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.185 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:07.185 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:07.185 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.185 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.185 14:39:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.185 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:07.185 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.185 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.185 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.185 "name": "raid_bdev1", 00:15:07.185 "uuid": "f3c2094f-b012-48f5-99d1-40e39769a89f", 00:15:07.185 "strip_size_kb": 0, 00:15:07.185 "state": "online", 00:15:07.185 "raid_level": "raid1", 00:15:07.185 "superblock": true, 00:15:07.185 "num_base_bdevs": 2, 00:15:07.185 "num_base_bdevs_discovered": 2, 00:15:07.185 "num_base_bdevs_operational": 2, 00:15:07.185 "base_bdevs_list": [ 00:15:07.185 { 00:15:07.185 "name": "spare", 00:15:07.185 "uuid": "6d380311-099e-5464-98df-e6cf6958cbca", 00:15:07.185 "is_configured": true, 00:15:07.185 "data_offset": 256, 00:15:07.185 "data_size": 7936 00:15:07.185 }, 00:15:07.185 { 00:15:07.185 "name": "BaseBdev2", 00:15:07.185 "uuid": "918c9fbd-082a-596e-981c-59ce193a54ba", 00:15:07.185 "is_configured": true, 00:15:07.185 "data_offset": 256, 00:15:07.185 "data_size": 7936 00:15:07.185 } 00:15:07.185 ] 00:15:07.185 }' 00:15:07.185 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.185 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:07.185 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.185 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:07.185 14:39:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:07.185 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:07.185 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:07.185 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:07.185 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:07.185 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:07.185 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.185 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.185 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.185 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.185 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.185 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.185 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:07.185 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.185 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.185 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.185 "name": 
"raid_bdev1", 00:15:07.185 "uuid": "f3c2094f-b012-48f5-99d1-40e39769a89f", 00:15:07.185 "strip_size_kb": 0, 00:15:07.185 "state": "online", 00:15:07.185 "raid_level": "raid1", 00:15:07.185 "superblock": true, 00:15:07.185 "num_base_bdevs": 2, 00:15:07.185 "num_base_bdevs_discovered": 2, 00:15:07.185 "num_base_bdevs_operational": 2, 00:15:07.185 "base_bdevs_list": [ 00:15:07.185 { 00:15:07.186 "name": "spare", 00:15:07.186 "uuid": "6d380311-099e-5464-98df-e6cf6958cbca", 00:15:07.186 "is_configured": true, 00:15:07.186 "data_offset": 256, 00:15:07.186 "data_size": 7936 00:15:07.186 }, 00:15:07.186 { 00:15:07.186 "name": "BaseBdev2", 00:15:07.186 "uuid": "918c9fbd-082a-596e-981c-59ce193a54ba", 00:15:07.186 "is_configured": true, 00:15:07.186 "data_offset": 256, 00:15:07.186 "data_size": 7936 00:15:07.186 } 00:15:07.186 ] 00:15:07.186 }' 00:15:07.186 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.186 14:39:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:07.759 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:07.760 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.760 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:07.760 [2024-10-01 14:39:59.138630] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:07.760 [2024-10-01 14:39:59.138677] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:07.760 [2024-10-01 14:39:59.138783] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:07.760 [2024-10-01 14:39:59.138864] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:07.760 [2024-10-01 
14:39:59.138876] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:07.760 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.760 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.760 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.760 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:07.760 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:15:07.760 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.760 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:07.760 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:15:07.760 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:07.760 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:07.760 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.760 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:07.760 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.760 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:07.760 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.760 14:39:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:07.760 [2024-10-01 14:39:59.190608] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:07.760 [2024-10-01 14:39:59.190665] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.760 [2024-10-01 14:39:59.190688] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:07.760 [2024-10-01 14:39:59.190697] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.760 [2024-10-01 14:39:59.192907] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.760 [2024-10-01 14:39:59.192939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:07.760 [2024-10-01 14:39:59.192995] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:07.760 [2024-10-01 14:39:59.193050] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:07.760 [2024-10-01 14:39:59.193158] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:07.760 spare 00:15:07.760 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.760 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:07.760 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.760 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:07.760 [2024-10-01 14:39:59.293254] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:07.760 [2024-10-01 14:39:59.293317] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:15:07.760 [2024-10-01 14:39:59.293454] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:07.760 [2024-10-01 14:39:59.293568] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:07.760 [2024-10-01 14:39:59.293578] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:07.760 [2024-10-01 14:39:59.293687] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:07.760 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.760 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:07.760 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:07.760 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:07.760 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:07.760 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:07.760 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:07.760 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.760 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.760 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.760 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.760 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.760 14:39:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.760 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:07.760 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.760 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.760 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.760 "name": "raid_bdev1", 00:15:07.760 "uuid": "f3c2094f-b012-48f5-99d1-40e39769a89f", 00:15:07.760 "strip_size_kb": 0, 00:15:07.760 "state": "online", 00:15:07.760 "raid_level": "raid1", 00:15:07.760 "superblock": true, 00:15:07.760 "num_base_bdevs": 2, 00:15:07.760 "num_base_bdevs_discovered": 2, 00:15:07.760 "num_base_bdevs_operational": 2, 00:15:07.760 "base_bdevs_list": [ 00:15:07.760 { 00:15:07.760 "name": "spare", 00:15:07.760 "uuid": "6d380311-099e-5464-98df-e6cf6958cbca", 00:15:07.760 "is_configured": true, 00:15:07.760 "data_offset": 256, 00:15:07.760 "data_size": 7936 00:15:07.760 }, 00:15:07.760 { 00:15:07.760 "name": "BaseBdev2", 00:15:07.760 "uuid": "918c9fbd-082a-596e-981c-59ce193a54ba", 00:15:07.760 "is_configured": true, 00:15:07.760 "data_offset": 256, 00:15:07.760 "data_size": 7936 00:15:07.760 } 00:15:07.760 ] 00:15:07.760 }' 00:15:07.760 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.760 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:08.021 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:08.021 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.021 14:39:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:08.021 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:08.021 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.021 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.021 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.021 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:08.021 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.021 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.021 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.021 "name": "raid_bdev1", 00:15:08.021 "uuid": "f3c2094f-b012-48f5-99d1-40e39769a89f", 00:15:08.021 "strip_size_kb": 0, 00:15:08.021 "state": "online", 00:15:08.021 "raid_level": "raid1", 00:15:08.021 "superblock": true, 00:15:08.021 "num_base_bdevs": 2, 00:15:08.021 "num_base_bdevs_discovered": 2, 00:15:08.021 "num_base_bdevs_operational": 2, 00:15:08.021 "base_bdevs_list": [ 00:15:08.021 { 00:15:08.021 "name": "spare", 00:15:08.021 "uuid": "6d380311-099e-5464-98df-e6cf6958cbca", 00:15:08.021 "is_configured": true, 00:15:08.021 "data_offset": 256, 00:15:08.021 "data_size": 7936 00:15:08.021 }, 00:15:08.021 { 00:15:08.021 "name": "BaseBdev2", 00:15:08.021 "uuid": "918c9fbd-082a-596e-981c-59ce193a54ba", 00:15:08.021 "is_configured": true, 00:15:08.021 "data_offset": 256, 00:15:08.021 "data_size": 7936 00:15:08.021 } 00:15:08.021 ] 00:15:08.021 }' 00:15:08.021 14:39:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.021 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:08.021 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.282 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:08.282 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.282 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.282 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:08.282 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:08.282 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.282 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:08.282 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:08.282 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.282 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:08.283 [2024-10-01 14:39:59.758866] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:08.283 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.283 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:08.283 14:39:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:08.283 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:08.283 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:08.283 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:08.283 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:08.283 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.283 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.283 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.283 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.283 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.283 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.283 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.283 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:08.283 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.283 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.283 "name": "raid_bdev1", 00:15:08.283 "uuid": "f3c2094f-b012-48f5-99d1-40e39769a89f", 00:15:08.283 "strip_size_kb": 0, 00:15:08.283 "state": "online", 00:15:08.283 
"raid_level": "raid1", 00:15:08.283 "superblock": true, 00:15:08.283 "num_base_bdevs": 2, 00:15:08.283 "num_base_bdevs_discovered": 1, 00:15:08.283 "num_base_bdevs_operational": 1, 00:15:08.283 "base_bdevs_list": [ 00:15:08.283 { 00:15:08.283 "name": null, 00:15:08.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.283 "is_configured": false, 00:15:08.283 "data_offset": 0, 00:15:08.283 "data_size": 7936 00:15:08.283 }, 00:15:08.283 { 00:15:08.283 "name": "BaseBdev2", 00:15:08.283 "uuid": "918c9fbd-082a-596e-981c-59ce193a54ba", 00:15:08.283 "is_configured": true, 00:15:08.283 "data_offset": 256, 00:15:08.283 "data_size": 7936 00:15:08.283 } 00:15:08.283 ] 00:15:08.283 }' 00:15:08.283 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.283 14:39:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:08.544 14:40:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:08.544 14:40:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.544 14:40:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:08.544 [2024-10-01 14:40:00.091062] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:08.544 [2024-10-01 14:40:00.091291] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:08.544 [2024-10-01 14:40:00.091309] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:08.544 [2024-10-01 14:40:00.091350] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:08.544 [2024-10-01 14:40:00.102148] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:08.544 14:40:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.544 14:40:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:08.544 [2024-10-01 14:40:00.104195] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:09.489 14:40:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:09.489 14:40:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:09.489 14:40:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:09.489 14:40:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:09.489 14:40:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:09.489 14:40:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.489 14:40:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.489 14:40:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.489 14:40:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:09.489 14:40:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.489 14:40:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:15:09.489 "name": "raid_bdev1", 00:15:09.489 "uuid": "f3c2094f-b012-48f5-99d1-40e39769a89f", 00:15:09.489 "strip_size_kb": 0, 00:15:09.489 "state": "online", 00:15:09.489 "raid_level": "raid1", 00:15:09.489 "superblock": true, 00:15:09.489 "num_base_bdevs": 2, 00:15:09.489 "num_base_bdevs_discovered": 2, 00:15:09.489 "num_base_bdevs_operational": 2, 00:15:09.489 "process": { 00:15:09.489 "type": "rebuild", 00:15:09.489 "target": "spare", 00:15:09.489 "progress": { 00:15:09.489 "blocks": 2560, 00:15:09.489 "percent": 32 00:15:09.489 } 00:15:09.489 }, 00:15:09.489 "base_bdevs_list": [ 00:15:09.489 { 00:15:09.489 "name": "spare", 00:15:09.489 "uuid": "6d380311-099e-5464-98df-e6cf6958cbca", 00:15:09.489 "is_configured": true, 00:15:09.489 "data_offset": 256, 00:15:09.489 "data_size": 7936 00:15:09.489 }, 00:15:09.489 { 00:15:09.489 "name": "BaseBdev2", 00:15:09.489 "uuid": "918c9fbd-082a-596e-981c-59ce193a54ba", 00:15:09.489 "is_configured": true, 00:15:09.489 "data_offset": 256, 00:15:09.489 "data_size": 7936 00:15:09.489 } 00:15:09.489 ] 00:15:09.489 }' 00:15:09.489 14:40:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:09.749 14:40:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:09.749 14:40:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:09.749 14:40:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:09.749 14:40:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:09.749 14:40:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.749 14:40:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:09.749 [2024-10-01 14:40:01.222284] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:09.749 [2024-10-01 14:40:01.311962] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:09.749 [2024-10-01 14:40:01.312060] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:09.750 [2024-10-01 14:40:01.312077] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:09.750 [2024-10-01 14:40:01.312087] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:09.750 14:40:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.750 14:40:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:09.750 14:40:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:09.750 14:40:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:09.750 14:40:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:09.750 14:40:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:09.750 14:40:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:09.750 14:40:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.750 14:40:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.750 14:40:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.750 14:40:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.750 14:40:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.750 14:40:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.750 14:40:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.750 14:40:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:09.750 14:40:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.750 14:40:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.750 "name": "raid_bdev1", 00:15:09.750 "uuid": "f3c2094f-b012-48f5-99d1-40e39769a89f", 00:15:09.750 "strip_size_kb": 0, 00:15:09.750 "state": "online", 00:15:09.750 "raid_level": "raid1", 00:15:09.750 "superblock": true, 00:15:09.750 "num_base_bdevs": 2, 00:15:09.750 "num_base_bdevs_discovered": 1, 00:15:09.750 "num_base_bdevs_operational": 1, 00:15:09.750 "base_bdevs_list": [ 00:15:09.750 { 00:15:09.750 "name": null, 00:15:09.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.750 "is_configured": false, 00:15:09.750 "data_offset": 0, 00:15:09.750 "data_size": 7936 00:15:09.750 }, 00:15:09.750 { 00:15:09.750 "name": "BaseBdev2", 00:15:09.750 "uuid": "918c9fbd-082a-596e-981c-59ce193a54ba", 00:15:09.750 "is_configured": true, 00:15:09.750 "data_offset": 256, 00:15:09.750 "data_size": 7936 00:15:09.750 } 00:15:09.750 ] 00:15:09.750 }' 00:15:09.750 14:40:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.750 14:40:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:10.010 14:40:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:10.011 14:40:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.011 14:40:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:10.011 [2024-10-01 14:40:01.675854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:10.011 [2024-10-01 14:40:01.675934] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.011 [2024-10-01 14:40:01.675963] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:10.011 [2024-10-01 14:40:01.675974] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.011 [2024-10-01 14:40:01.676194] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.011 [2024-10-01 14:40:01.676210] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:10.011 [2024-10-01 14:40:01.676271] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:10.011 [2024-10-01 14:40:01.676286] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:10.011 [2024-10-01 14:40:01.676296] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:10.011 [2024-10-01 14:40:01.676319] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:10.011 [2024-10-01 14:40:01.686826] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:10.011 spare 00:15:10.011 14:40:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.011 14:40:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:10.011 [2024-10-01 14:40:01.688862] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:11.397 14:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:11.397 14:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.397 14:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:11.397 14:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:11.397 14:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.397 14:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.397 14:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.397 14:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:11.397 14:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.397 14:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.397 14:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:15:11.397 "name": "raid_bdev1", 00:15:11.397 "uuid": "f3c2094f-b012-48f5-99d1-40e39769a89f", 00:15:11.397 "strip_size_kb": 0, 00:15:11.397 "state": "online", 00:15:11.397 "raid_level": "raid1", 00:15:11.397 "superblock": true, 00:15:11.397 "num_base_bdevs": 2, 00:15:11.397 "num_base_bdevs_discovered": 2, 00:15:11.397 "num_base_bdevs_operational": 2, 00:15:11.397 "process": { 00:15:11.397 "type": "rebuild", 00:15:11.397 "target": "spare", 00:15:11.397 "progress": { 00:15:11.397 "blocks": 2560, 00:15:11.397 "percent": 32 00:15:11.397 } 00:15:11.397 }, 00:15:11.397 "base_bdevs_list": [ 00:15:11.397 { 00:15:11.397 "name": "spare", 00:15:11.397 "uuid": "6d380311-099e-5464-98df-e6cf6958cbca", 00:15:11.397 "is_configured": true, 00:15:11.397 "data_offset": 256, 00:15:11.397 "data_size": 7936 00:15:11.397 }, 00:15:11.397 { 00:15:11.397 "name": "BaseBdev2", 00:15:11.397 "uuid": "918c9fbd-082a-596e-981c-59ce193a54ba", 00:15:11.397 "is_configured": true, 00:15:11.397 "data_offset": 256, 00:15:11.397 "data_size": 7936 00:15:11.397 } 00:15:11.397 ] 00:15:11.397 }' 00:15:11.397 14:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.397 14:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:11.397 14:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.397 14:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:11.397 14:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:11.397 14:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.397 14:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:11.397 [2024-10-01 
14:40:02.790716] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:11.398 [2024-10-01 14:40:02.795741] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:11.398 [2024-10-01 14:40:02.795793] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:11.398 [2024-10-01 14:40:02.795808] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:11.398 [2024-10-01 14:40:02.795814] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:11.398 14:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.398 14:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:11.398 14:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.398 14:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.398 14:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:11.398 14:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:11.398 14:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:11.398 14:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.398 14:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.398 14:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.398 14:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.398 14:40:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.398 14:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.398 14:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.398 14:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:11.398 14:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.398 14:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.398 "name": "raid_bdev1", 00:15:11.398 "uuid": "f3c2094f-b012-48f5-99d1-40e39769a89f", 00:15:11.398 "strip_size_kb": 0, 00:15:11.398 "state": "online", 00:15:11.398 "raid_level": "raid1", 00:15:11.398 "superblock": true, 00:15:11.398 "num_base_bdevs": 2, 00:15:11.398 "num_base_bdevs_discovered": 1, 00:15:11.398 "num_base_bdevs_operational": 1, 00:15:11.398 "base_bdevs_list": [ 00:15:11.398 { 00:15:11.398 "name": null, 00:15:11.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.398 "is_configured": false, 00:15:11.398 "data_offset": 0, 00:15:11.398 "data_size": 7936 00:15:11.398 }, 00:15:11.398 { 00:15:11.398 "name": "BaseBdev2", 00:15:11.398 "uuid": "918c9fbd-082a-596e-981c-59ce193a54ba", 00:15:11.398 "is_configured": true, 00:15:11.398 "data_offset": 256, 00:15:11.398 "data_size": 7936 00:15:11.398 } 00:15:11.398 ] 00:15:11.398 }' 00:15:11.398 14:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.398 14:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:11.659 14:40:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:11.659 14:40:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.659 14:40:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:11.659 14:40:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:11.659 14:40:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.659 14:40:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.659 14:40:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.659 14:40:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.659 14:40:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:11.659 14:40:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.659 14:40:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.659 "name": "raid_bdev1", 00:15:11.659 "uuid": "f3c2094f-b012-48f5-99d1-40e39769a89f", 00:15:11.659 "strip_size_kb": 0, 00:15:11.659 "state": "online", 00:15:11.659 "raid_level": "raid1", 00:15:11.659 "superblock": true, 00:15:11.659 "num_base_bdevs": 2, 00:15:11.659 "num_base_bdevs_discovered": 1, 00:15:11.659 "num_base_bdevs_operational": 1, 00:15:11.659 "base_bdevs_list": [ 00:15:11.659 { 00:15:11.659 "name": null, 00:15:11.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.659 "is_configured": false, 00:15:11.659 "data_offset": 0, 00:15:11.659 "data_size": 7936 00:15:11.659 }, 00:15:11.659 { 00:15:11.659 "name": "BaseBdev2", 00:15:11.659 "uuid": "918c9fbd-082a-596e-981c-59ce193a54ba", 00:15:11.659 "is_configured": true, 00:15:11.659 "data_offset": 256, 
00:15:11.659 "data_size": 7936 00:15:11.659 } 00:15:11.659 ] 00:15:11.659 }' 00:15:11.659 14:40:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.659 14:40:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:11.659 14:40:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.659 14:40:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:11.659 14:40:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:11.659 14:40:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.659 14:40:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:11.659 14:40:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.659 14:40:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:11.659 14:40:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.659 14:40:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:11.659 [2024-10-01 14:40:03.244062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:11.659 [2024-10-01 14:40:03.244129] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.659 [2024-10-01 14:40:03.244152] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:11.659 [2024-10-01 14:40:03.244160] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.659 [2024-10-01 14:40:03.244328] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.659 [2024-10-01 14:40:03.244339] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:11.659 [2024-10-01 14:40:03.244383] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:11.659 [2024-10-01 14:40:03.244395] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:11.659 [2024-10-01 14:40:03.244404] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:11.659 [2024-10-01 14:40:03.244413] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:11.659 BaseBdev1 00:15:11.659 14:40:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.659 14:40:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:12.603 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:12.603 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.603 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:12.603 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:12.603 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:12.603 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:12.603 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.603 14:40:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.603 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.603 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.603 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.603 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.603 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.603 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:12.603 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.603 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.603 "name": "raid_bdev1", 00:15:12.603 "uuid": "f3c2094f-b012-48f5-99d1-40e39769a89f", 00:15:12.603 "strip_size_kb": 0, 00:15:12.603 "state": "online", 00:15:12.603 "raid_level": "raid1", 00:15:12.603 "superblock": true, 00:15:12.603 "num_base_bdevs": 2, 00:15:12.603 "num_base_bdevs_discovered": 1, 00:15:12.603 "num_base_bdevs_operational": 1, 00:15:12.603 "base_bdevs_list": [ 00:15:12.603 { 00:15:12.603 "name": null, 00:15:12.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.603 "is_configured": false, 00:15:12.603 "data_offset": 0, 00:15:12.603 "data_size": 7936 00:15:12.603 }, 00:15:12.603 { 00:15:12.603 "name": "BaseBdev2", 00:15:12.603 "uuid": "918c9fbd-082a-596e-981c-59ce193a54ba", 00:15:12.603 "is_configured": true, 00:15:12.603 "data_offset": 256, 00:15:12.603 "data_size": 7936 00:15:12.603 } 00:15:12.603 ] 00:15:12.603 }' 00:15:12.603 14:40:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.603 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:12.864 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:12.864 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.864 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:12.864 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:12.864 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.864 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.864 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.864 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:13.124 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.124 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.124 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.124 "name": "raid_bdev1", 00:15:13.124 "uuid": "f3c2094f-b012-48f5-99d1-40e39769a89f", 00:15:13.124 "strip_size_kb": 0, 00:15:13.124 "state": "online", 00:15:13.124 "raid_level": "raid1", 00:15:13.124 "superblock": true, 00:15:13.124 "num_base_bdevs": 2, 00:15:13.124 "num_base_bdevs_discovered": 1, 00:15:13.124 "num_base_bdevs_operational": 1, 00:15:13.124 "base_bdevs_list": [ 00:15:13.124 { 00:15:13.124 "name": 
null, 00:15:13.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.124 "is_configured": false, 00:15:13.124 "data_offset": 0, 00:15:13.124 "data_size": 7936 00:15:13.124 }, 00:15:13.124 { 00:15:13.124 "name": "BaseBdev2", 00:15:13.124 "uuid": "918c9fbd-082a-596e-981c-59ce193a54ba", 00:15:13.124 "is_configured": true, 00:15:13.124 "data_offset": 256, 00:15:13.124 "data_size": 7936 00:15:13.124 } 00:15:13.124 ] 00:15:13.124 }' 00:15:13.124 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.124 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:13.124 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.124 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:13.125 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:13.125 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:15:13.125 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:13.125 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:13.125 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:13.125 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:13.125 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:13.125 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:13.125 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.125 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:13.125 [2024-10-01 14:40:04.644401] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:13.125 [2024-10-01 14:40:04.644578] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:13.125 [2024-10-01 14:40:04.644593] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:13.125 request: 00:15:13.125 { 00:15:13.125 "base_bdev": "BaseBdev1", 00:15:13.125 "raid_bdev": "raid_bdev1", 00:15:13.125 "method": "bdev_raid_add_base_bdev", 00:15:13.125 "req_id": 1 00:15:13.125 } 00:15:13.125 Got JSON-RPC error response 00:15:13.125 response: 00:15:13.125 { 00:15:13.125 "code": -22, 00:15:13.125 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:13.125 } 00:15:13.125 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:13.125 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:15:13.125 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:13.125 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:13.125 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:13.125 14:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:14.068 14:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:15:14.068 14:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:14.068 14:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:14.068 14:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:14.068 14:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:14.068 14:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:14.068 14:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.068 14:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.068 14:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.068 14:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.068 14:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.068 14:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.068 14:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.068 14:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:14.068 14:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.068 14:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.068 "name": "raid_bdev1", 00:15:14.068 "uuid": "f3c2094f-b012-48f5-99d1-40e39769a89f", 00:15:14.068 "strip_size_kb": 0, 
00:15:14.068 "state": "online", 00:15:14.068 "raid_level": "raid1", 00:15:14.068 "superblock": true, 00:15:14.068 "num_base_bdevs": 2, 00:15:14.068 "num_base_bdevs_discovered": 1, 00:15:14.068 "num_base_bdevs_operational": 1, 00:15:14.068 "base_bdevs_list": [ 00:15:14.068 { 00:15:14.068 "name": null, 00:15:14.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.068 "is_configured": false, 00:15:14.068 "data_offset": 0, 00:15:14.068 "data_size": 7936 00:15:14.068 }, 00:15:14.068 { 00:15:14.068 "name": "BaseBdev2", 00:15:14.068 "uuid": "918c9fbd-082a-596e-981c-59ce193a54ba", 00:15:14.068 "is_configured": true, 00:15:14.068 "data_offset": 256, 00:15:14.068 "data_size": 7936 00:15:14.068 } 00:15:14.068 ] 00:15:14.068 }' 00:15:14.068 14:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.068 14:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:14.330 14:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:14.330 14:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.330 14:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:14.330 14:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:14.330 14:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.330 14:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.330 14:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.330 14:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.330 
14:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:14.330 14:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.330 14:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.330 "name": "raid_bdev1", 00:15:14.330 "uuid": "f3c2094f-b012-48f5-99d1-40e39769a89f", 00:15:14.330 "strip_size_kb": 0, 00:15:14.330 "state": "online", 00:15:14.330 "raid_level": "raid1", 00:15:14.330 "superblock": true, 00:15:14.330 "num_base_bdevs": 2, 00:15:14.330 "num_base_bdevs_discovered": 1, 00:15:14.330 "num_base_bdevs_operational": 1, 00:15:14.330 "base_bdevs_list": [ 00:15:14.330 { 00:15:14.330 "name": null, 00:15:14.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.330 "is_configured": false, 00:15:14.330 "data_offset": 0, 00:15:14.330 "data_size": 7936 00:15:14.330 }, 00:15:14.330 { 00:15:14.330 "name": "BaseBdev2", 00:15:14.330 "uuid": "918c9fbd-082a-596e-981c-59ce193a54ba", 00:15:14.330 "is_configured": true, 00:15:14.330 "data_offset": 256, 00:15:14.330 "data_size": 7936 00:15:14.330 } 00:15:14.330 ] 00:15:14.330 }' 00:15:14.330 14:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.589 14:40:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:14.589 14:40:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.589 14:40:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:14.589 14:40:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 86743 00:15:14.589 14:40:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 86743 ']' 00:15:14.589 14:40:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 86743 00:15:14.589 14:40:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:15:14.589 14:40:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:14.589 14:40:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86743 00:15:14.589 14:40:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:14.589 14:40:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:14.589 killing process with pid 86743 00:15:14.589 14:40:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86743' 00:15:14.589 14:40:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 86743 00:15:14.589 Received shutdown signal, test time was about 60.000000 seconds 00:15:14.589 00:15:14.589 Latency(us) 00:15:14.589 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:14.589 =================================================================================================================== 00:15:14.589 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:14.589 [2024-10-01 14:40:06.082518] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:14.589 14:40:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 86743 00:15:14.589 [2024-10-01 14:40:06.082646] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:14.589 [2024-10-01 14:40:06.082720] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:14.589 [2024-10-01 14:40:06.082739] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:14.589 [2024-10-01 14:40:06.268839] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:15.524 14:40:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:15:15.524 00:15:15.524 real 0m15.316s 00:15:15.524 user 0m19.379s 00:15:15.524 sys 0m1.130s 00:15:15.524 14:40:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:15.524 14:40:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:15.524 ************************************ 00:15:15.524 END TEST raid_rebuild_test_sb_md_interleaved 00:15:15.524 ************************************ 00:15:15.524 14:40:07 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:15:15.524 14:40:07 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:15:15.524 14:40:07 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 86743 ']' 00:15:15.524 14:40:07 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 86743 00:15:15.524 14:40:07 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:15:15.524 00:15:15.524 real 9m56.835s 00:15:15.524 user 13m6.714s 00:15:15.524 sys 1m23.361s 00:15:15.524 14:40:07 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:15.524 14:40:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:15.524 ************************************ 00:15:15.524 END TEST bdev_raid 00:15:15.524 ************************************ 00:15:15.524 14:40:07 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:15:15.524 14:40:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:15.524 14:40:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:15.524 14:40:07 -- common/autotest_common.sh@10 -- # set +x 00:15:15.782 ************************************ 00:15:15.782 START TEST spdkcli_raid 00:15:15.782 
************************************ 00:15:15.782 14:40:07 spdkcli_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:15:15.782 * Looking for test storage... 00:15:15.782 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:15:15.782 14:40:07 spdkcli_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:15.782 14:40:07 spdkcli_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:15.783 14:40:07 spdkcli_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:15:15.783 14:40:07 spdkcli_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:15.783 14:40:07 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:15.783 14:40:07 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:15.783 14:40:07 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:15.783 14:40:07 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:15:15.783 14:40:07 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:15:15.783 14:40:07 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:15:15.783 14:40:07 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:15:15.783 14:40:07 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:15:15.783 14:40:07 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:15:15.783 14:40:07 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:15:15.783 14:40:07 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:15.783 14:40:07 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:15:15.783 14:40:07 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:15:15.783 14:40:07 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:15.783 14:40:07 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:15.783 14:40:07 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:15:15.783 14:40:07 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:15:15.783 14:40:07 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:15.783 14:40:07 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:15:15.783 14:40:07 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:15:15.783 14:40:07 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:15:15.783 14:40:07 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:15:15.783 14:40:07 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:15.783 14:40:07 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:15:15.783 14:40:07 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:15:15.783 14:40:07 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:15.783 14:40:07 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:15.783 14:40:07 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:15:15.783 14:40:07 spdkcli_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:15.783 14:40:07 spdkcli_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:15.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.783 --rc genhtml_branch_coverage=1 00:15:15.783 --rc genhtml_function_coverage=1 00:15:15.783 --rc genhtml_legend=1 00:15:15.783 --rc geninfo_all_blocks=1 00:15:15.783 --rc geninfo_unexecuted_blocks=1 00:15:15.783 00:15:15.783 ' 00:15:15.783 14:40:07 spdkcli_raid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:15.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.783 --rc genhtml_branch_coverage=1 00:15:15.783 --rc genhtml_function_coverage=1 00:15:15.783 --rc genhtml_legend=1 00:15:15.783 --rc geninfo_all_blocks=1 00:15:15.783 --rc geninfo_unexecuted_blocks=1 00:15:15.783 00:15:15.783 ' 00:15:15.783 
14:40:07 spdkcli_raid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:15.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.783 --rc genhtml_branch_coverage=1 00:15:15.783 --rc genhtml_function_coverage=1 00:15:15.783 --rc genhtml_legend=1 00:15:15.783 --rc geninfo_all_blocks=1 00:15:15.783 --rc geninfo_unexecuted_blocks=1 00:15:15.783 00:15:15.783 ' 00:15:15.783 14:40:07 spdkcli_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:15.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.783 --rc genhtml_branch_coverage=1 00:15:15.783 --rc genhtml_function_coverage=1 00:15:15.783 --rc genhtml_legend=1 00:15:15.783 --rc geninfo_all_blocks=1 00:15:15.783 --rc geninfo_unexecuted_blocks=1 00:15:15.783 00:15:15.783 ' 00:15:15.783 14:40:07 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:15:15.783 14:40:07 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:15:15.783 14:40:07 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:15:15.783 14:40:07 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:15:15.783 14:40:07 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:15:15.783 14:40:07 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:15:15.783 14:40:07 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:15:15.783 14:40:07 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:15:15.783 14:40:07 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:15:15.783 14:40:07 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:15:15.783 14:40:07 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:15:15.783 14:40:07 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:15:15.783 14:40:07 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:15:15.783 14:40:07 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:15:15.783 14:40:07 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:15:15.783 14:40:07 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:15:15.783 14:40:07 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:15:15.783 14:40:07 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:15:15.783 14:40:07 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:15:15.783 14:40:07 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:15:15.783 14:40:07 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:15:15.783 14:40:07 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:15:15.783 14:40:07 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:15:15.783 14:40:07 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:15:15.783 14:40:07 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:15:15.783 14:40:07 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:15:15.783 14:40:07 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:15:15.783 14:40:07 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:15:15.783 14:40:07 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:15:15.783 14:40:07 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:15:15.783 14:40:07 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:15:15.783 14:40:07 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:15:15.783 14:40:07 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:15:15.783 14:40:07 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:15.783 14:40:07 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:15:15.783 14:40:07 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:15:15.783 14:40:07 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=87393 00:15:15.783 14:40:07 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 87393 00:15:15.783 14:40:07 spdkcli_raid -- common/autotest_common.sh@831 -- # '[' -z 87393 ']' 00:15:15.783 14:40:07 spdkcli_raid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.783 14:40:07 spdkcli_raid -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:15.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.783 14:40:07 spdkcli_raid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.783 14:40:07 spdkcli_raid -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:15.783 14:40:07 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:15:15.783 14:40:07 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:15:16.041 [2024-10-01 14:40:07.476339] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:15:16.041 [2024-10-01 14:40:07.476501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87393 ] 00:15:16.041 [2024-10-01 14:40:07.638376] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:16.306 [2024-10-01 14:40:07.830230] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.306 [2024-10-01 14:40:07.830287] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:16.874 14:40:08 spdkcli_raid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:16.874 14:40:08 spdkcli_raid -- common/autotest_common.sh@864 -- # return 0 00:15:16.874 14:40:08 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:15:16.874 14:40:08 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:16.874 14:40:08 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:15:16.874 14:40:08 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:15:16.874 14:40:08 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:16.874 14:40:08 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:15:16.875 14:40:08 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:15:16.875 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:15:16.875 ' 00:15:18.259 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:15:18.259 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:15:18.519 14:40:10 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:15:18.519 14:40:10 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:18.519 14:40:10 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:15:18.519 14:40:10 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:15:18.519 14:40:10 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:18.519 14:40:10 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:15:18.519 14:40:10 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:15:18.519 ' 00:15:19.461 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:15:19.721 14:40:11 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:15:19.721 14:40:11 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:19.721 14:40:11 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:15:19.721 14:40:11 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:15:19.721 14:40:11 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:19.721 14:40:11 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:15:19.721 14:40:11 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:15:19.721 14:40:11 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:15:20.289 14:40:11 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:15:20.289 14:40:11 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:15:20.289 14:40:11 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:15:20.289 14:40:11 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:20.289 14:40:11 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:15:20.289 14:40:11 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:15:20.289 14:40:11 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:20.289 14:40:11 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:15:20.289 14:40:11 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:15:20.289 ' 00:15:21.227 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:15:21.227 14:40:12 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:15:21.227 14:40:12 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:21.227 14:40:12 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:15:21.227 14:40:12 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:15:21.227 14:40:12 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:21.227 14:40:12 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:15:21.227 14:40:12 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:15:21.227 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:15:21.227 ' 00:15:22.611 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:15:22.611 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:15:22.872 14:40:14 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:15:22.872 14:40:14 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:22.872 14:40:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:15:22.872 14:40:14 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 87393 00:15:22.872 14:40:14 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 87393 ']' 00:15:22.872 14:40:14 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 87393 00:15:22.872 14:40:14 spdkcli_raid -- 
common/autotest_common.sh@955 -- # uname 00:15:22.872 14:40:14 spdkcli_raid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:22.872 14:40:14 spdkcli_raid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87393 00:15:22.872 killing process with pid 87393 00:15:22.872 14:40:14 spdkcli_raid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:22.872 14:40:14 spdkcli_raid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:22.872 14:40:14 spdkcli_raid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87393' 00:15:22.872 14:40:14 spdkcli_raid -- common/autotest_common.sh@969 -- # kill 87393 00:15:22.872 14:40:14 spdkcli_raid -- common/autotest_common.sh@974 -- # wait 87393 00:15:24.780 Process with pid 87393 is not found 00:15:24.780 14:40:16 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:15:24.780 14:40:16 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 87393 ']' 00:15:24.780 14:40:16 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 87393 00:15:24.780 14:40:16 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 87393 ']' 00:15:24.780 14:40:16 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 87393 00:15:24.780 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (87393) - No such process 00:15:24.780 14:40:16 spdkcli_raid -- common/autotest_common.sh@977 -- # echo 'Process with pid 87393 is not found' 00:15:24.780 14:40:16 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:15:24.780 14:40:16 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:15:24.780 14:40:16 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:15:24.780 14:40:16 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:15:24.780 00:15:24.780 real 0m8.829s 00:15:24.780 user 0m18.157s 00:15:24.780 sys 
0m0.788s 00:15:24.780 14:40:16 spdkcli_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:24.780 ************************************ 00:15:24.780 END TEST spdkcli_raid 00:15:24.780 ************************************ 00:15:24.780 14:40:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:15:24.780 14:40:16 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:15:24.780 14:40:16 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:24.780 14:40:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:24.780 14:40:16 -- common/autotest_common.sh@10 -- # set +x 00:15:24.780 ************************************ 00:15:24.780 START TEST blockdev_raid5f 00:15:24.780 ************************************ 00:15:24.781 14:40:16 blockdev_raid5f -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:15:24.781 * Looking for test storage... 00:15:24.781 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:24.781 14:40:16 blockdev_raid5f -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:24.781 14:40:16 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lcov --version 00:15:24.781 14:40:16 blockdev_raid5f -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:24.781 14:40:16 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:24.781 14:40:16 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:24.781 14:40:16 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:24.781 14:40:16 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:24.781 14:40:16 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:15:24.781 14:40:16 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:15:24.781 14:40:16 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:15:24.781 14:40:16 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:15:24.781 14:40:16 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:15:24.781 14:40:16 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:15:24.781 14:40:16 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:15:24.781 14:40:16 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:24.781 14:40:16 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:15:24.781 14:40:16 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:15:24.781 14:40:16 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:24.781 14:40:16 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:24.781 14:40:16 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:15:24.781 14:40:16 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:15:24.781 14:40:16 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:24.781 14:40:16 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:15:24.781 14:40:16 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:15:24.781 14:40:16 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:15:24.781 14:40:16 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:15:24.781 14:40:16 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:24.781 14:40:16 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:15:24.781 14:40:16 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:15:24.781 14:40:16 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:24.781 14:40:16 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:24.781 14:40:16 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:15:24.781 14:40:16 blockdev_raid5f -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:24.781 14:40:16 blockdev_raid5f -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:24.781 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.781 --rc genhtml_branch_coverage=1 00:15:24.781 --rc genhtml_function_coverage=1 00:15:24.781 --rc genhtml_legend=1 00:15:24.781 --rc geninfo_all_blocks=1 00:15:24.781 --rc geninfo_unexecuted_blocks=1 00:15:24.781 00:15:24.781 ' 00:15:24.781 14:40:16 blockdev_raid5f -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:24.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.781 --rc genhtml_branch_coverage=1 00:15:24.781 --rc genhtml_function_coverage=1 00:15:24.781 --rc genhtml_legend=1 00:15:24.781 --rc geninfo_all_blocks=1 00:15:24.781 --rc geninfo_unexecuted_blocks=1 00:15:24.781 00:15:24.781 ' 00:15:24.781 14:40:16 blockdev_raid5f -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:24.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.781 --rc genhtml_branch_coverage=1 00:15:24.781 --rc genhtml_function_coverage=1 00:15:24.781 --rc genhtml_legend=1 00:15:24.781 --rc geninfo_all_blocks=1 00:15:24.781 --rc geninfo_unexecuted_blocks=1 00:15:24.781 00:15:24.781 ' 00:15:24.781 14:40:16 blockdev_raid5f -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:24.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.781 --rc genhtml_branch_coverage=1 00:15:24.781 --rc genhtml_function_coverage=1 00:15:24.781 --rc genhtml_legend=1 00:15:24.781 --rc geninfo_all_blocks=1 00:15:24.781 --rc geninfo_unexecuted_blocks=1 00:15:24.781 00:15:24.781 ' 00:15:24.781 14:40:16 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:24.781 14:40:16 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:15:24.781 14:40:16 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:24.781 14:40:16 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:24.781 14:40:16 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:24.781 14:40:16 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:24.781 14:40:16 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:15:24.781 14:40:16 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:24.781 14:40:16 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:15:24.781 14:40:16 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:15:24.781 14:40:16 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:15:24.781 14:40:16 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:15:24.781 14:40:16 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:15:24.781 14:40:16 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:15:24.781 14:40:16 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:15:24.781 14:40:16 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:15:24.781 14:40:16 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:15:24.781 14:40:16 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:15:24.781 14:40:16 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:15:24.781 14:40:16 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:15:24.781 14:40:16 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:15:24.781 14:40:16 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:15:24.781 14:40:16 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:15:24.781 14:40:16 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:15:24.781 14:40:16 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=87662 00:15:24.781 14:40:16 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:24.781 14:40:16 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 
87662 00:15:24.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.781 14:40:16 blockdev_raid5f -- common/autotest_common.sh@831 -- # '[' -z 87662 ']' 00:15:24.781 14:40:16 blockdev_raid5f -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.781 14:40:16 blockdev_raid5f -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:24.781 14:40:16 blockdev_raid5f -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.781 14:40:16 blockdev_raid5f -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:24.781 14:40:16 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:15:24.781 14:40:16 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:24.781 [2024-10-01 14:40:16.353936] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:15:24.781 [2024-10-01 14:40:16.354058] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87662 ] 00:15:25.041 [2024-10-01 14:40:16.503585] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.041 [2024-10-01 14:40:16.694480] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.612 14:40:17 blockdev_raid5f -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:25.612 14:40:17 blockdev_raid5f -- common/autotest_common.sh@864 -- # return 0 00:15:25.612 14:40:17 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:15:25.612 14:40:17 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:15:25.612 14:40:17 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:15:25.612 14:40:17 blockdev_raid5f -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.612 14:40:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:15:25.872 Malloc0 00:15:25.872 Malloc1 00:15:25.872 Malloc2 00:15:25.872 14:40:17 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.872 14:40:17 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:15:25.872 14:40:17 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.872 14:40:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:15:25.872 14:40:17 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.872 14:40:17 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:15:25.872 14:40:17 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:15:25.872 14:40:17 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.872 14:40:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:15:25.872 14:40:17 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.872 14:40:17 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:15:25.872 14:40:17 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.872 14:40:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:15:25.872 14:40:17 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.872 14:40:17 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:25.872 14:40:17 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.872 14:40:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:15:25.872 14:40:17 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.872 14:40:17 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:15:25.872 14:40:17 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:15:25.872 14:40:17 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.872 14:40:17 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:15:25.872 14:40:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:15:25.872 14:40:17 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.872 14:40:17 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:15:25.872 14:40:17 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:15:25.872 14:40:17 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "a1355a5d-4d05-4c0c-a113-b8b2b4ddf643"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "a1355a5d-4d05-4c0c-a113-b8b2b4ddf643",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "a1355a5d-4d05-4c0c-a113-b8b2b4ddf643",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "d1ee3924-6dfc-4593-9e71-64611529b6a3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"97dc0705-1342-4694-b320-5e5a319ad0b6",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "5df48c75-5c24-487f-b627-be15333d0bac",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:15:25.872 14:40:17 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:15:25.872 14:40:17 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:15:25.872 14:40:17 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:15:25.872 14:40:17 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 87662 00:15:25.872 14:40:17 blockdev_raid5f -- common/autotest_common.sh@950 -- # '[' -z 87662 ']' 00:15:25.872 14:40:17 blockdev_raid5f -- common/autotest_common.sh@954 -- # kill -0 87662 00:15:25.872 14:40:17 blockdev_raid5f -- common/autotest_common.sh@955 -- # uname 00:15:25.872 14:40:17 blockdev_raid5f -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:25.872 14:40:17 blockdev_raid5f -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87662 00:15:25.872 14:40:17 blockdev_raid5f -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:25.872 14:40:17 blockdev_raid5f -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:25.872 14:40:17 blockdev_raid5f -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87662' 00:15:25.872 killing process with pid 87662 00:15:25.872 14:40:17 blockdev_raid5f -- common/autotest_common.sh@969 -- # kill 87662 00:15:25.872 14:40:17 blockdev_raid5f -- common/autotest_common.sh@974 -- # wait 87662 00:15:27.814 14:40:19 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:27.814 14:40:19 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:15:27.814 14:40:19 
blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:27.814 14:40:19 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:27.814 14:40:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:15:27.814 ************************************ 00:15:27.814 START TEST bdev_hello_world 00:15:27.814 ************************************ 00:15:27.814 14:40:19 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:15:27.814 [2024-10-01 14:40:19.410689] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:15:27.814 [2024-10-01 14:40:19.410839] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87718 ] 00:15:28.075 [2024-10-01 14:40:19.562412] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.075 [2024-10-01 14:40:19.752366] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.648 [2024-10-01 14:40:20.145928] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:28.648 [2024-10-01 14:40:20.145979] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:15:28.648 [2024-10-01 14:40:20.145995] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:28.648 [2024-10-01 14:40:20.146447] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:28.648 [2024-10-01 14:40:20.146563] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:28.648 [2024-10-01 14:40:20.146576] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:28.648 [2024-10-01 14:40:20.146632] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:15:28.648 00:15:28.648 [2024-10-01 14:40:20.146648] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:29.592 00:15:29.592 real 0m1.782s 00:15:29.592 user 0m1.450s 00:15:29.592 sys 0m0.207s 00:15:29.592 14:40:21 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:29.592 14:40:21 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:29.592 ************************************ 00:15:29.592 END TEST bdev_hello_world 00:15:29.592 ************************************ 00:15:29.592 14:40:21 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:15:29.592 14:40:21 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:29.592 14:40:21 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:29.592 14:40:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:15:29.592 ************************************ 00:15:29.592 START TEST bdev_bounds 00:15:29.592 ************************************ 00:15:29.592 14:40:21 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:15:29.592 14:40:21 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:29.592 14:40:21 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=87761 00:15:29.592 Process bdevio pid: 87761 00:15:29.592 14:40:21 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:29.592 14:40:21 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 87761' 00:15:29.592 14:40:21 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 87761 00:15:29.592 14:40:21 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 87761 ']' 00:15:29.592 14:40:21 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.592 14:40:21 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:29.592 14:40:21 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:29.592 14:40:21 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:29.592 14:40:21 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:29.592 [2024-10-01 14:40:21.262727] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:15:29.592 [2024-10-01 14:40:21.262857] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87761 ] 00:15:29.854 [2024-10-01 14:40:21.414997] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:30.115 [2024-10-01 14:40:21.606647] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.115 [2024-10-01 14:40:21.607001] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:30.115 [2024-10-01 14:40:21.607100] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.689 14:40:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:30.689 14:40:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:15:30.689 14:40:22 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:30.689 I/O targets: 00:15:30.689 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:15:30.689 00:15:30.689 
00:15:30.689 CUnit - A unit testing framework for C - Version 2.1-3 00:15:30.689 http://cunit.sourceforge.net/ 00:15:30.689 00:15:30.689 00:15:30.689 Suite: bdevio tests on: raid5f 00:15:30.689 Test: blockdev write read block ...passed 00:15:30.689 Test: blockdev write zeroes read block ...passed 00:15:30.689 Test: blockdev write zeroes read no split ...passed 00:15:30.689 Test: blockdev write zeroes read split ...passed 00:15:30.951 Test: blockdev write zeroes read split partial ...passed 00:15:30.951 Test: blockdev reset ...passed 00:15:30.951 Test: blockdev write read 8 blocks ...passed 00:15:30.951 Test: blockdev write read size > 128k ...passed 00:15:30.951 Test: blockdev write read invalid size ...passed 00:15:30.951 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:30.951 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:30.951 Test: blockdev write read max offset ...passed 00:15:30.951 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:30.951 Test: blockdev writev readv 8 blocks ...passed 00:15:30.951 Test: blockdev writev readv 30 x 1block ...passed 00:15:30.951 Test: blockdev writev readv block ...passed 00:15:30.951 Test: blockdev writev readv size > 128k ...passed 00:15:30.951 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:30.951 Test: blockdev comparev and writev ...passed 00:15:30.951 Test: blockdev nvme passthru rw ...passed 00:15:30.951 Test: blockdev nvme passthru vendor specific ...passed 00:15:30.951 Test: blockdev nvme admin passthru ...passed 00:15:30.951 Test: blockdev copy ...passed 00:15:30.951 00:15:30.951 Run Summary: Type Total Ran Passed Failed Inactive 00:15:30.951 suites 1 1 n/a 0 0 00:15:30.951 tests 23 23 23 0 0 00:15:30.951 asserts 130 130 130 0 n/a 00:15:30.951 00:15:30.951 Elapsed time = 0.462 seconds 00:15:30.951 0 00:15:30.951 14:40:22 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 87761 00:15:30.951 
14:40:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 87761 ']' 00:15:30.951 14:40:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 87761 00:15:30.951 14:40:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:15:30.951 14:40:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:30.951 14:40:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87761 00:15:30.951 killing process with pid 87761 00:15:30.951 14:40:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:30.951 14:40:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:30.951 14:40:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87761' 00:15:30.951 14:40:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@969 -- # kill 87761 00:15:30.951 14:40:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@974 -- # wait 87761 00:15:31.904 14:40:23 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:31.904 00:15:31.904 real 0m2.243s 00:15:31.904 user 0m5.281s 00:15:31.904 sys 0m0.269s 00:15:31.904 14:40:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:31.904 14:40:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:31.904 ************************************ 00:15:31.904 END TEST bdev_bounds 00:15:31.904 ************************************ 00:15:31.904 14:40:23 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:15:31.904 14:40:23 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:31.904 14:40:23 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:31.904 
14:40:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:15:31.904 ************************************ 00:15:31.904 START TEST bdev_nbd 00:15:31.904 ************************************ 00:15:31.904 14:40:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:15:31.904 14:40:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:31.904 14:40:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:15:31.904 14:40:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:31.904 14:40:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:31.904 14:40:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:15:31.904 14:40:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:31.904 14:40:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:15:31.904 14:40:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:31.904 14:40:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:31.904 14:40:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:31.904 14:40:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:15:31.904 14:40:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:15:31.904 14:40:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:31.904 14:40:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:15:31.904 14:40:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:15:31.904 14:40:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=87815 00:15:31.904 14:40:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:31.904 14:40:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 87815 /var/tmp/spdk-nbd.sock 00:15:31.904 14:40:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 87815 ']' 00:15:31.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:31.904 14:40:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:31.904 14:40:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:31.904 14:40:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:31.904 14:40:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:31.904 14:40:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:31.904 14:40:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:31.904 [2024-10-01 14:40:23.582112] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:15:31.904 [2024-10-01 14:40:23.582255] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:32.179 [2024-10-01 14:40:23.732004] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.439 [2024-10-01 14:40:23.924267] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.012 14:40:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:33.012 14:40:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:15:33.012 14:40:24 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:15:33.013 14:40:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:33.013 14:40:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:15:33.013 14:40:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:33.013 14:40:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:15:33.013 14:40:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:33.013 14:40:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:15:33.013 14:40:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:33.013 14:40:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:33.013 14:40:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:33.013 14:40:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:33.013 14:40:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:15:33.013 14:40:24 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:15:33.013 14:40:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:33.013 14:40:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:33.013 14:40:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:33.013 14:40:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:33.013 14:40:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:15:33.013 14:40:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:33.013 14:40:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:33.013 14:40:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:33.013 14:40:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:15:33.013 14:40:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:33.013 14:40:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:33.013 14:40:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:33.013 1+0 records in 00:15:33.013 1+0 records out 00:15:33.013 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000562013 s, 7.3 MB/s 00:15:33.013 14:40:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.013 14:40:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:15:33.013 14:40:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.013 14:40:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:15:33.013 14:40:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:15:33.013 14:40:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:33.013 14:40:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:15:33.013 14:40:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:33.274 14:40:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:33.274 { 00:15:33.274 "nbd_device": "/dev/nbd0", 00:15:33.274 "bdev_name": "raid5f" 00:15:33.274 } 00:15:33.274 ]' 00:15:33.274 14:40:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:33.274 14:40:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:33.274 { 00:15:33.274 "nbd_device": "/dev/nbd0", 00:15:33.274 "bdev_name": "raid5f" 00:15:33.274 } 00:15:33.274 ]' 00:15:33.274 14:40:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:33.274 14:40:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:33.274 14:40:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:33.274 14:40:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:33.274 14:40:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:33.274 14:40:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:33.274 14:40:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:33.274 14:40:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:33.535 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:15:33.535 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:33.535 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:33.535 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:33.535 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:33.535 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:33.535 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:33.535 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:33.535 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:33.535 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:33.535 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:33.796 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:33.796 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:33.796 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:33.796 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:33.796 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:33.796 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:33.796 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:33.796 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:33.796 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:33.796 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:33.796 14:40:25 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:33.796 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:33.796 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:15:33.796 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:33.796 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:15:33.796 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:33.796 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:15:33.796 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:33.796 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:15:33.796 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:33.796 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:15:33.796 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:33.796 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:33.796 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:33.796 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:33.796 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:33.796 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:33.796 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:15:34.058 /dev/nbd0 00:15:34.058 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:34.058 14:40:25 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:34.058 14:40:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:34.058 14:40:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:15:34.058 14:40:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:34.058 14:40:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:34.058 14:40:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:34.058 14:40:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:15:34.058 14:40:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:34.058 14:40:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:34.058 14:40:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:34.058 1+0 records in 00:15:34.058 1+0 records out 00:15:34.058 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003865 s, 10.6 MB/s 00:15:34.058 14:40:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.058 14:40:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:15:34.058 14:40:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.058 14:40:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:34.058 14:40:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:15:34.058 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:34.058 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:34.058 14:40:25 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:34.058 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:34.058 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:34.319 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:34.319 { 00:15:34.319 "nbd_device": "/dev/nbd0", 00:15:34.319 "bdev_name": "raid5f" 00:15:34.319 } 00:15:34.319 ]' 00:15:34.319 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:34.319 { 00:15:34.319 "nbd_device": "/dev/nbd0", 00:15:34.319 "bdev_name": "raid5f" 00:15:34.319 } 00:15:34.319 ]' 00:15:34.319 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:34.319 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:15:34.319 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:15:34.319 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:34.319 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:15:34.320 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:15:34.320 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:15:34.320 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:15:34.320 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:15:34.320 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:15:34.320 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:34.320 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:34.320 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:34.320 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:34.320 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:34.320 256+0 records in 00:15:34.320 256+0 records out 00:15:34.320 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00417259 s, 251 MB/s 00:15:34.320 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:34.320 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:34.320 256+0 records in 00:15:34.320 256+0 records out 00:15:34.320 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0897442 s, 11.7 MB/s 00:15:34.320 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:15:34.320 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:15:34.320 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:34.320 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:34.320 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:34.320 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:34.320 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:34.320 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:34.320 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:34.320 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:34.320 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:34.320 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:34.320 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:34.320 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:34.320 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:34.320 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:34.320 14:40:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:34.580 14:40:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:34.580 14:40:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:34.580 14:40:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:34.580 14:40:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:34.580 14:40:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:34.580 14:40:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:34.580 14:40:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:34.580 14:40:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:34.580 14:40:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:34.580 14:40:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:34.580 14:40:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:15:34.842 14:40:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:34.842 14:40:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:34.842 14:40:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:34.842 14:40:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:34.842 14:40:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:34.842 14:40:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:34.842 14:40:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:34.842 14:40:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:34.842 14:40:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:34.842 14:40:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:34.842 14:40:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:34.842 14:40:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:34.842 14:40:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:34.842 14:40:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:34.842 14:40:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:15:34.842 14:40:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:35.103 malloc_lvol_verify 00:15:35.103 14:40:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:35.365 c1db0352-f29f-47f6-a2c7-33b5fa1d0b09 00:15:35.365 14:40:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:35.627 fa00f3c0-a2b3-4c91-a3a1-c569fea0e926 00:15:35.627 14:40:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:35.627 /dev/nbd0 00:15:35.627 14:40:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:15:35.627 14:40:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:15:35.627 14:40:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:15:35.627 14:40:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:15:35.627 14:40:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:15:35.627 mke2fs 1.47.0 (5-Feb-2023) 00:15:35.627 Discarding device blocks: 0/4096 done 00:15:35.627 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:35.627 00:15:35.627 Allocating group tables: 0/1 done 00:15:35.627 Writing inode tables: 0/1 done 00:15:35.627 Creating journal (1024 blocks): done 00:15:35.627 Writing superblocks and filesystem accounting information: 0/1 done 00:15:35.627 00:15:35.627 14:40:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:35.627 14:40:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:35.627 14:40:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:35.627 14:40:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:35.627 14:40:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:35.627 14:40:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:35.627 14:40:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:35.887 14:40:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:35.887 14:40:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:35.887 14:40:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:35.887 14:40:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:35.887 14:40:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:35.887 14:40:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:35.887 14:40:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:35.887 14:40:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:35.887 14:40:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 87815 00:15:35.887 14:40:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 87815 ']' 00:15:35.887 14:40:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 87815 00:15:35.887 14:40:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:15:35.887 14:40:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:35.887 14:40:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87815 00:15:35.887 killing process with pid 87815 00:15:35.887 14:40:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:35.887 14:40:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:35.887 14:40:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87815' 00:15:35.887 14:40:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@969 -- # kill 87815 00:15:35.887 14:40:27 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@974 -- # wait 87815 00:15:37.274 ************************************ 00:15:37.274 END TEST bdev_nbd 00:15:37.274 ************************************ 00:15:37.274 14:40:28 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:15:37.274 00:15:37.274 real 0m5.047s 00:15:37.274 user 0m7.157s 00:15:37.274 sys 0m0.966s 00:15:37.274 14:40:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:37.274 14:40:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:37.274 14:40:28 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:15:37.274 14:40:28 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:15:37.274 14:40:28 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:15:37.274 14:40:28 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:15:37.274 14:40:28 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:37.274 14:40:28 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:37.274 14:40:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:15:37.274 ************************************ 00:15:37.274 START TEST bdev_fio 00:15:37.274 ************************************ 00:15:37.274 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:15:37.274 14:40:28 
blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* 
]] 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:37.274 ************************************ 00:15:37.274 START TEST bdev_fio_rw_verify 00:15:37.274 ************************************ 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1347 -- # break 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:37.274 14:40:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:37.274 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:37.274 fio-3.35 00:15:37.274 Starting 1 thread 00:15:49.500 00:15:49.500 job_raid5f: (groupid=0, jobs=1): err= 0: pid=88010: Tue Oct 1 14:40:39 2024 00:15:49.500 read: IOPS=10.2k, BW=40.0MiB/s (41.9MB/s)(400MiB/10001msec) 00:15:49.500 slat (nsec): min=22246, max=56690, avg=23341.52, stdev=1648.49 00:15:49.500 clat (usec): min=11, max=402, avg=159.26, stdev=54.95 00:15:49.501 lat (usec): min=34, max=445, avg=182.61, stdev=55.03 00:15:49.501 clat percentiles (usec): 00:15:49.501 | 50.000th=[ 167], 99.000th=[ 251], 99.900th=[ 302], 99.990th=[ 359], 00:15:49.501 | 99.999th=[ 404] 00:15:49.501 write: IOPS=10.7k, BW=41.8MiB/s (43.8MB/s)(413MiB/9870msec); 0 zone resets 00:15:49.501 slat (nsec): min=9430, max=74756, avg=19957.74, stdev=2342.06 00:15:49.501 clat (usec): min=67, max=754, avg=356.95, stdev=43.04 00:15:49.501 lat (usec): min=86, max=787, avg=376.91, stdev=43.40 00:15:49.501 clat percentiles (usec): 00:15:49.501 | 50.000th=[ 363], 99.000th=[ 429], 99.900th=[ 570], 99.990th=[ 619], 00:15:49.501 | 99.999th=[ 725] 00:15:49.501 bw ( KiB/s): min=38808, max=45784, per=99.15%, avg=42439.16, stdev=1577.28, samples=19 00:15:49.501 iops : min= 9702, max=11446, avg=10609.79, stdev=394.32, samples=19 00:15:49.501 lat (usec) : 20=0.01%, 
50=0.01%, 100=11.35%, 250=37.38%, 500=51.11% 00:15:49.501 lat (usec) : 750=0.15%, 1000=0.01% 00:15:49.501 cpu : usr=99.21%, sys=0.20%, ctx=22, majf=0, minf=8590 00:15:49.501 IO depths : 1=7.7%, 2=20.0%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:49.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:49.501 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:49.501 issued rwts: total=102412,105614,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:49.501 latency : target=0, window=0, percentile=100.00%, depth=8 00:15:49.501 00:15:49.501 Run status group 0 (all jobs): 00:15:49.501 READ: bw=40.0MiB/s (41.9MB/s), 40.0MiB/s-40.0MiB/s (41.9MB/s-41.9MB/s), io=400MiB (419MB), run=10001-10001msec 00:15:49.501 WRITE: bw=41.8MiB/s (43.8MB/s), 41.8MiB/s-41.8MiB/s (43.8MB/s-43.8MB/s), io=413MiB (433MB), run=9870-9870msec 00:15:49.501 ----------------------------------------------------- 00:15:49.501 Suppressions used: 00:15:49.501 count bytes template 00:15:49.501 1 7 /usr/src/fio/parse.c 00:15:49.501 22 2112 /usr/src/fio/iolog.c 00:15:49.501 1 8 libtcmalloc_minimal.so 00:15:49.501 1 904 libcrypto.so 00:15:49.501 ----------------------------------------------------- 00:15:49.501 00:15:49.501 00:15:49.501 real 0m12.055s 00:15:49.501 user 0m12.689s 00:15:49.501 sys 0m0.496s 00:15:49.501 14:40:40 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:49.501 ************************************ 00:15:49.501 END TEST bdev_fio_rw_verify 00:15:49.501 ************************************ 00:15:49.501 14:40:40 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:15:49.501 14:40:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:15:49.501 14:40:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:49.501 14:40:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # 
fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:15:49.501 14:40:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:49.501 14:40:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:15:49.501 14:40:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:15:49.501 14:40:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:15:49.501 14:40:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:15:49.501 14:40:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:49.501 14:40:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:15:49.501 14:40:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:15:49.501 14:40:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:49.501 14:40:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:15:49.501 14:40:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:15:49.501 14:40:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:15:49.501 14:40:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:15:49.501 14:40:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:15:49.501 14:40:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "a1355a5d-4d05-4c0c-a113-b8b2b4ddf643"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "a1355a5d-4d05-4c0c-a113-b8b2b4ddf643",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "a1355a5d-4d05-4c0c-a113-b8b2b4ddf643",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "d1ee3924-6dfc-4593-9e71-64611529b6a3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "97dc0705-1342-4694-b320-5e5a319ad0b6",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "5df48c75-5c24-487f-b627-be15333d0bac",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:15:49.501 14:40:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:15:49.501 14:40:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:49.501 /home/vagrant/spdk_repo/spdk 00:15:49.501 ************************************ 00:15:49.501 END TEST bdev_fio 00:15:49.501 ************************************ 00:15:49.501 14:40:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:15:49.501 14:40:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT 
SIGTERM EXIT 00:15:49.501 14:40:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:15:49.501 00:15:49.501 real 0m12.257s 00:15:49.501 user 0m12.766s 00:15:49.501 sys 0m0.575s 00:15:49.501 14:40:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:49.501 14:40:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:49.501 14:40:40 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:49.501 14:40:40 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:49.501 14:40:40 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:15:49.501 14:40:40 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:49.501 14:40:40 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:15:49.501 ************************************ 00:15:49.501 START TEST bdev_verify 00:15:49.501 ************************************ 00:15:49.501 14:40:40 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:49.501 [2024-10-01 14:40:41.019248] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:15:49.501 [2024-10-01 14:40:41.019367] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88175 ] 00:15:49.501 [2024-10-01 14:40:41.166629] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:49.762 [2024-10-01 14:40:41.357813] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:49.762 [2024-10-01 14:40:41.357829] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.333 Running I/O for 5 seconds... 00:15:55.484 16034.00 IOPS, 62.63 MiB/s 15923.00 IOPS, 62.20 MiB/s 15964.00 IOPS, 62.36 MiB/s 15988.50 IOPS, 62.46 MiB/s 16096.40 IOPS, 62.88 MiB/s 00:15:55.485 Latency(us) 00:15:55.485 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:55.485 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:55.485 Verification LBA range: start 0x0 length 0x2000 00:15:55.485 raid5f : 5.01 7854.24 30.68 0.00 0.00 24023.53 259.94 47992.52 00:15:55.485 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:55.485 Verification LBA range: start 0x2000 length 0x2000 00:15:55.485 raid5f : 5.01 8212.17 32.08 0.00 0.00 23557.66 206.38 17644.31 00:15:55.485 =================================================================================================================== 00:15:55.485 Total : 16066.41 62.76 0.00 0.00 23785.36 206.38 47992.52 00:15:56.444 00:15:56.444 real 0m6.825s 00:15:56.444 user 0m12.553s 00:15:56.444 sys 0m0.211s 00:15:56.444 ************************************ 00:15:56.444 END TEST bdev_verify 00:15:56.444 ************************************ 00:15:56.444 14:40:47 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:56.444 14:40:47 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set 
+x 00:15:56.444 14:40:47 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:56.444 14:40:47 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:15:56.444 14:40:47 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:56.444 14:40:47 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:15:56.444 ************************************ 00:15:56.444 START TEST bdev_verify_big_io 00:15:56.444 ************************************ 00:15:56.444 14:40:47 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:56.444 [2024-10-01 14:40:47.912255] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:15:56.444 [2024-10-01 14:40:47.912547] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88269 ] 00:15:56.444 [2024-10-01 14:40:48.061511] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:56.727 [2024-10-01 14:40:48.254645] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:56.727 [2024-10-01 14:40:48.254807] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.987 Running I/O for 5 seconds... 
00:16:02.388 760.00 IOPS, 47.50 MiB/s 854.50 IOPS, 53.41 MiB/s 846.00 IOPS, 52.88 MiB/s 825.00 IOPS, 51.56 MiB/s 812.40 IOPS, 50.77 MiB/s 00:16:02.388 Latency(us) 00:16:02.388 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:02.388 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:02.388 Verification LBA range: start 0x0 length 0x200 00:16:02.388 raid5f : 5.24 411.94 25.75 0.00 0.00 7575526.19 166.20 364581.81 00:16:02.388 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:02.388 Verification LBA range: start 0x200 length 0x200 00:16:02.388 raid5f : 5.23 413.05 25.82 0.00 0.00 7488360.39 206.38 364581.81 00:16:02.388 =================================================================================================================== 00:16:02.388 Total : 824.98 51.56 0.00 0.00 7531923.10 166.20 364581.81 00:16:03.329 00:16:03.329 real 0m7.056s 00:16:03.329 user 0m13.012s 00:16:03.329 sys 0m0.208s 00:16:03.329 ************************************ 00:16:03.329 END TEST bdev_verify_big_io 00:16:03.329 ************************************ 00:16:03.329 14:40:54 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:03.329 14:40:54 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:16:03.329 14:40:54 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:03.329 14:40:54 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:16:03.329 14:40:54 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:03.329 14:40:54 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:03.329 ************************************ 00:16:03.329 START TEST bdev_write_zeroes 00:16:03.329 ************************************ 
00:16:03.329 14:40:54 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:03.593 [2024-10-01 14:40:55.027105] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:16:03.593 [2024-10-01 14:40:55.027409] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88362 ] 00:16:03.593 [2024-10-01 14:40:55.177624] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.860 [2024-10-01 14:40:55.368452] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.120 Running I/O for 1 seconds... 00:16:05.500 22935.00 IOPS, 89.59 MiB/s 00:16:05.500 Latency(us) 00:16:05.500 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:05.500 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:05.500 raid5f : 1.01 22915.57 89.51 0.00 0.00 5565.97 1575.38 7662.67 00:16:05.500 =================================================================================================================== 00:16:05.500 Total : 22915.57 89.51 0.00 0.00 5565.97 1575.38 7662.67 00:16:06.440 00:16:06.440 real 0m2.808s 00:16:06.440 user 0m2.475s 00:16:06.440 sys 0m0.203s 00:16:06.440 ************************************ 00:16:06.440 END TEST bdev_write_zeroes 00:16:06.440 ************************************ 00:16:06.440 14:40:57 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:06.440 14:40:57 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:16:06.441 14:40:57 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:06.441 14:40:57 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:16:06.441 14:40:57 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:06.441 14:40:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:06.441 ************************************ 00:16:06.441 START TEST bdev_json_nonenclosed 00:16:06.441 ************************************ 00:16:06.441 14:40:57 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:06.441 [2024-10-01 14:40:57.886049] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:16:06.441 [2024-10-01 14:40:57.886178] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88415 ] 00:16:06.441 [2024-10-01 14:40:58.037451] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.701 [2024-10-01 14:40:58.236012] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.701 [2024-10-01 14:40:58.236104] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:16:06.701 [2024-10-01 14:40:58.236123] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:06.701 [2024-10-01 14:40:58.236133] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:06.962 ************************************ 00:16:06.962 END TEST bdev_json_nonenclosed 00:16:06.962 ************************************ 00:16:06.962 00:16:06.962 real 0m0.713s 00:16:06.962 user 0m0.506s 00:16:06.962 sys 0m0.101s 00:16:06.962 14:40:58 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:06.962 14:40:58 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:06.962 14:40:58 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:06.962 14:40:58 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:16:06.962 14:40:58 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:06.962 14:40:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:06.962 ************************************ 00:16:06.962 START TEST bdev_json_nonarray 00:16:06.962 ************************************ 00:16:06.962 14:40:58 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:06.962 [2024-10-01 14:40:58.644854] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:16:07.223 [2024-10-01 14:40:58.645151] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88436 ] 00:16:07.223 [2024-10-01 14:40:58.795497] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.483 [2024-10-01 14:40:58.985352] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.483 [2024-10-01 14:40:58.985450] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:16:07.483 [2024-10-01 14:40:58.985472] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:07.483 [2024-10-01 14:40:58.985482] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:07.743 ************************************ 00:16:07.743 END TEST bdev_json_nonarray 00:16:07.743 ************************************ 00:16:07.743 00:16:07.743 real 0m0.699s 00:16:07.743 user 0m0.485s 00:16:07.743 sys 0m0.107s 00:16:07.743 14:40:59 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:07.743 14:40:59 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:07.743 14:40:59 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:16:07.743 14:40:59 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:16:07.743 14:40:59 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:16:07.743 14:40:59 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:16:07.743 14:40:59 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:16:07.743 14:40:59 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:07.743 14:40:59 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:07.743 14:40:59 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:16:07.743 14:40:59 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:16:07.743 14:40:59 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:16:07.743 14:40:59 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:16:07.743 00:16:07.743 real 0m43.214s 00:16:07.743 user 0m58.991s 00:16:07.743 sys 0m3.573s 00:16:07.743 ************************************ 00:16:07.743 END TEST blockdev_raid5f 00:16:07.743 ************************************ 00:16:07.743 14:40:59 blockdev_raid5f -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:07.743 14:40:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:07.743 14:40:59 -- spdk/autotest.sh@194 -- # uname -s 00:16:07.743 14:40:59 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:16:07.743 14:40:59 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:16:07.743 14:40:59 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:16:07.743 14:40:59 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:16:07.743 14:40:59 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:16:07.743 14:40:59 -- spdk/autotest.sh@256 -- # timing_exit lib 00:16:07.743 14:40:59 -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:07.743 14:40:59 -- common/autotest_common.sh@10 -- # set +x 00:16:07.743 14:40:59 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:16:07.743 14:40:59 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:16:07.744 14:40:59 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:16:07.744 14:40:59 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:16:07.744 14:40:59 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:16:07.744 14:40:59 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:16:07.744 14:40:59 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:16:07.744 14:40:59 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:16:07.744 14:40:59 -- spdk/autotest.sh@334 -- # '[' 
0 -eq 1 ']' 00:16:07.744 14:40:59 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:16:07.744 14:40:59 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:16:07.744 14:40:59 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:16:07.744 14:40:59 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:16:07.744 14:40:59 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:16:07.744 14:40:59 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:16:07.744 14:40:59 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:16:07.744 14:40:59 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:16:07.744 14:40:59 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:16:07.744 14:40:59 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:16:07.744 14:40:59 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:16:07.744 14:40:59 -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:07.744 14:40:59 -- common/autotest_common.sh@10 -- # set +x 00:16:07.744 14:40:59 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:16:07.744 14:40:59 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:16:07.744 14:40:59 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:16:07.744 14:40:59 -- common/autotest_common.sh@10 -- # set +x 00:16:09.129 INFO: APP EXITING 00:16:09.129 INFO: killing all VMs 00:16:09.129 INFO: killing vhost app 00:16:09.129 INFO: EXIT DONE 00:16:09.391 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:09.391 Waiting for block devices as requested 00:16:09.391 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:09.391 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:09.955 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:09.955 Cleaning 00:16:09.955 Removing: /var/run/dpdk/spdk0/config 00:16:09.955 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:16:10.213 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:16:10.213 Removing: 
/var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:16:10.213 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:16:10.213 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:16:10.213 Removing: /var/run/dpdk/spdk0/hugepage_info 00:16:10.213 Removing: /dev/shm/spdk_tgt_trace.pid56112 00:16:10.213 Removing: /var/run/dpdk/spdk0 00:16:10.213 Removing: /var/run/dpdk/spdk_pid55910 00:16:10.213 Removing: /var/run/dpdk/spdk_pid56112 00:16:10.213 Removing: /var/run/dpdk/spdk_pid56325 00:16:10.213 Removing: /var/run/dpdk/spdk_pid56423 00:16:10.213 Removing: /var/run/dpdk/spdk_pid56468 00:16:10.213 Removing: /var/run/dpdk/spdk_pid56595 00:16:10.213 Removing: /var/run/dpdk/spdk_pid56614 00:16:10.213 Removing: /var/run/dpdk/spdk_pid56819 00:16:10.213 Removing: /var/run/dpdk/spdk_pid56912 00:16:10.213 Removing: /var/run/dpdk/spdk_pid57013 00:16:10.213 Removing: /var/run/dpdk/spdk_pid57130 00:16:10.213 Removing: /var/run/dpdk/spdk_pid57227 00:16:10.213 Removing: /var/run/dpdk/spdk_pid57261 00:16:10.213 Removing: /var/run/dpdk/spdk_pid57303 00:16:10.213 Removing: /var/run/dpdk/spdk_pid57379 00:16:10.213 Removing: /var/run/dpdk/spdk_pid57490 00:16:10.213 Removing: /var/run/dpdk/spdk_pid57927 00:16:10.213 Removing: /var/run/dpdk/spdk_pid57985 00:16:10.213 Removing: /var/run/dpdk/spdk_pid58048 00:16:10.213 Removing: /var/run/dpdk/spdk_pid58064 00:16:10.213 Removing: /var/run/dpdk/spdk_pid58172 00:16:10.213 Removing: /var/run/dpdk/spdk_pid58188 00:16:10.213 Removing: /var/run/dpdk/spdk_pid58301 00:16:10.213 Removing: /var/run/dpdk/spdk_pid58317 00:16:10.213 Removing: /var/run/dpdk/spdk_pid58375 00:16:10.213 Removing: /var/run/dpdk/spdk_pid58393 00:16:10.213 Removing: /var/run/dpdk/spdk_pid58452 00:16:10.213 Removing: /var/run/dpdk/spdk_pid58470 00:16:10.213 Removing: /var/run/dpdk/spdk_pid58641 00:16:10.213 Removing: /var/run/dpdk/spdk_pid58677 00:16:10.213 Removing: /var/run/dpdk/spdk_pid58761 00:16:10.213 Removing: /var/run/dpdk/spdk_pid60045 00:16:10.213 Removing: 
/var/run/dpdk/spdk_pid60246 00:16:10.213 Removing: /var/run/dpdk/spdk_pid60386 00:16:10.213 Removing: /var/run/dpdk/spdk_pid61008 00:16:10.213 Removing: /var/run/dpdk/spdk_pid61214 00:16:10.213 Removing: /var/run/dpdk/spdk_pid61354 00:16:10.213 Removing: /var/run/dpdk/spdk_pid61977 00:16:10.213 Removing: /var/run/dpdk/spdk_pid62291 00:16:10.213 Removing: /var/run/dpdk/spdk_pid62431 00:16:10.213 Removing: /var/run/dpdk/spdk_pid63755 00:16:10.213 Removing: /var/run/dpdk/spdk_pid64003 00:16:10.213 Removing: /var/run/dpdk/spdk_pid64143 00:16:10.213 Removing: /var/run/dpdk/spdk_pid65466 00:16:10.213 Removing: /var/run/dpdk/spdk_pid65704 00:16:10.213 Removing: /var/run/dpdk/spdk_pid65844 00:16:10.213 Removing: /var/run/dpdk/spdk_pid67168 00:16:10.213 Removing: /var/run/dpdk/spdk_pid67592 00:16:10.213 Removing: /var/run/dpdk/spdk_pid67732 00:16:10.213 Removing: /var/run/dpdk/spdk_pid69151 00:16:10.213 Removing: /var/run/dpdk/spdk_pid69399 00:16:10.213 Removing: /var/run/dpdk/spdk_pid69539 00:16:10.213 Removing: /var/run/dpdk/spdk_pid70948 00:16:10.213 Removing: /var/run/dpdk/spdk_pid71196 00:16:10.213 Removing: /var/run/dpdk/spdk_pid71335 00:16:10.213 Removing: /var/run/dpdk/spdk_pid72761 00:16:10.213 Removing: /var/run/dpdk/spdk_pid73226 00:16:10.213 Removing: /var/run/dpdk/spdk_pid73360 00:16:10.213 Removing: /var/run/dpdk/spdk_pid73493 00:16:10.213 Removing: /var/run/dpdk/spdk_pid73912 00:16:10.213 Removing: /var/run/dpdk/spdk_pid74624 00:16:10.213 Removing: /var/run/dpdk/spdk_pid75002 00:16:10.213 Removing: /var/run/dpdk/spdk_pid75665 00:16:10.213 Removing: /var/run/dpdk/spdk_pid76091 00:16:10.213 Removing: /var/run/dpdk/spdk_pid76833 00:16:10.213 Removing: /var/run/dpdk/spdk_pid77223 00:16:10.213 Removing: /var/run/dpdk/spdk_pid79128 00:16:10.213 Removing: /var/run/dpdk/spdk_pid79555 00:16:10.213 Removing: /var/run/dpdk/spdk_pid79985 00:16:10.213 Removing: /var/run/dpdk/spdk_pid81986 00:16:10.213 Removing: /var/run/dpdk/spdk_pid82455 00:16:10.213 Removing: 
/var/run/dpdk/spdk_pid82960 00:16:10.213 Removing: /var/run/dpdk/spdk_pid83985 00:16:10.213 Removing: /var/run/dpdk/spdk_pid84301 00:16:10.213 Removing: /var/run/dpdk/spdk_pid85206 00:16:10.213 Removing: /var/run/dpdk/spdk_pid85512 00:16:10.213 Removing: /var/run/dpdk/spdk_pid86427 00:16:10.213 Removing: /var/run/dpdk/spdk_pid86743 00:16:10.213 Removing: /var/run/dpdk/spdk_pid87393 00:16:10.213 Removing: /var/run/dpdk/spdk_pid87662 00:16:10.213 Removing: /var/run/dpdk/spdk_pid87718 00:16:10.213 Removing: /var/run/dpdk/spdk_pid87761 00:16:10.213 Removing: /var/run/dpdk/spdk_pid87999 00:16:10.213 Removing: /var/run/dpdk/spdk_pid88175 00:16:10.213 Removing: /var/run/dpdk/spdk_pid88269 00:16:10.213 Removing: /var/run/dpdk/spdk_pid88362 00:16:10.213 Removing: /var/run/dpdk/spdk_pid88415 00:16:10.213 Removing: /var/run/dpdk/spdk_pid88436 00:16:10.213 Clean 00:16:10.213 14:41:01 -- common/autotest_common.sh@1451 -- # return 0 00:16:10.213 14:41:01 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:16:10.213 14:41:01 -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:10.213 14:41:01 -- common/autotest_common.sh@10 -- # set +x 00:16:10.470 14:41:01 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:16:10.470 14:41:01 -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:10.470 14:41:01 -- common/autotest_common.sh@10 -- # set +x 00:16:10.470 14:41:01 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:16:10.470 14:41:01 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:16:10.470 14:41:01 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:16:10.470 14:41:01 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:16:10.470 14:41:01 -- spdk/autotest.sh@394 -- # hostname 00:16:10.470 14:41:01 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc 
genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:16:10.470 geninfo: WARNING: invalid characters removed from testname! 00:16:32.460 14:41:23 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:16:34.989 14:41:26 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:16:37.516 14:41:28 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:16:39.436 14:41:30 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:16:41.333 14:41:32 -- spdk/autotest.sh@402 -- # lcov --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:16:43.862 14:41:35 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:16:45.763 14:41:37 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:16:45.763 14:41:37 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:16:45.763 14:41:37 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:16:45.763 14:41:37 -- common/autotest_common.sh@1681 -- $ lcov --version 00:16:46.022 14:41:37 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:16:46.022 14:41:37 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:16:46.022 14:41:37 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:16:46.022 14:41:37 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:16:46.022 14:41:37 -- scripts/common.sh@336 -- $ IFS=.-: 00:16:46.022 14:41:37 -- scripts/common.sh@336 -- $ read -ra ver1 00:16:46.022 14:41:37 -- scripts/common.sh@337 -- $ IFS=.-: 00:16:46.022 14:41:37 -- scripts/common.sh@337 -- $ read -ra ver2 00:16:46.022 14:41:37 -- scripts/common.sh@338 -- $ local 'op=<' 00:16:46.022 14:41:37 -- scripts/common.sh@340 -- $ ver1_l=2 00:16:46.022 14:41:37 -- scripts/common.sh@341 -- $ ver2_l=1 00:16:46.022 14:41:37 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:16:46.022 14:41:37 -- scripts/common.sh@344 -- $ case "$op" in 00:16:46.022 14:41:37 -- scripts/common.sh@345 -- $ : 1 
00:16:46.022 14:41:37 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:16:46.022 14:41:37 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:46.022 14:41:37 -- scripts/common.sh@365 -- $ decimal 1 00:16:46.022 14:41:37 -- scripts/common.sh@353 -- $ local d=1 00:16:46.022 14:41:37 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:16:46.022 14:41:37 -- scripts/common.sh@355 -- $ echo 1 00:16:46.022 14:41:37 -- scripts/common.sh@365 -- $ ver1[v]=1 00:16:46.022 14:41:37 -- scripts/common.sh@366 -- $ decimal 2 00:16:46.022 14:41:37 -- scripts/common.sh@353 -- $ local d=2 00:16:46.022 14:41:37 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:16:46.022 14:41:37 -- scripts/common.sh@355 -- $ echo 2 00:16:46.022 14:41:37 -- scripts/common.sh@366 -- $ ver2[v]=2 00:16:46.022 14:41:37 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:16:46.022 14:41:37 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:16:46.022 14:41:37 -- scripts/common.sh@368 -- $ return 0 00:16:46.022 14:41:37 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:46.022 14:41:37 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:16:46.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.022 --rc genhtml_branch_coverage=1 00:16:46.022 --rc genhtml_function_coverage=1 00:16:46.022 --rc genhtml_legend=1 00:16:46.022 --rc geninfo_all_blocks=1 00:16:46.022 --rc geninfo_unexecuted_blocks=1 00:16:46.022 00:16:46.022 ' 00:16:46.022 14:41:37 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:16:46.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.022 --rc genhtml_branch_coverage=1 00:16:46.022 --rc genhtml_function_coverage=1 00:16:46.022 --rc genhtml_legend=1 00:16:46.022 --rc geninfo_all_blocks=1 00:16:46.022 --rc geninfo_unexecuted_blocks=1 00:16:46.022 00:16:46.022 ' 00:16:46.022 14:41:37 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 
00:16:46.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:46.022 --rc genhtml_branch_coverage=1
00:16:46.022 --rc genhtml_function_coverage=1
00:16:46.022 --rc genhtml_legend=1
00:16:46.022 --rc geninfo_all_blocks=1
00:16:46.022 --rc geninfo_unexecuted_blocks=1
00:16:46.022 
00:16:46.022 '
00:16:46.022 14:41:37 -- common/autotest_common.sh@1695 -- $ LCOV='lcov
00:16:46.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:46.022 --rc genhtml_branch_coverage=1
00:16:46.022 --rc genhtml_function_coverage=1
00:16:46.022 --rc genhtml_legend=1
00:16:46.022 --rc geninfo_all_blocks=1
00:16:46.022 --rc geninfo_unexecuted_blocks=1
00:16:46.022 
00:16:46.022 '
00:16:46.022 14:41:37 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:16:46.022 14:41:37 -- scripts/common.sh@15 -- $ shopt -s extglob
00:16:46.022 14:41:37 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:16:46.022 14:41:37 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:16:46.022 14:41:37 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:16:46.022 14:41:37 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:46.022 14:41:37 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:46.022 14:41:37 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:46.022 14:41:37 -- paths/export.sh@5 -- $ export PATH
00:16:46.022 14:41:37 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:46.022 14:41:37 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:16:46.022 14:41:37 -- common/autobuild_common.sh@479 -- $ date +%s
00:16:46.022 14:41:37 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727793697.XXXXXX
00:16:46.022 14:41:37 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727793697.66oLat
00:16:46.022 14:41:37 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:16:46.022 14:41:37 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']'
00:16:46.022 14:41:37 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:16:46.022 14:41:37 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:16:46.022 14:41:37 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:16:46.022 14:41:37 -- common/autobuild_common.sh@495 -- $ get_config_params
00:16:46.022 14:41:37 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:16:46.022 14:41:37 -- common/autotest_common.sh@10 -- $ set +x
00:16:46.022 14:41:37 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:16:46.022 14:41:37 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:16:46.022 14:41:37 -- pm/common@17 -- $ local monitor
00:16:46.023 14:41:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:16:46.023 14:41:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:16:46.023 14:41:37 -- pm/common@25 -- $ sleep 1
00:16:46.023 14:41:37 -- pm/common@21 -- $ date +%s
00:16:46.023 14:41:37 -- pm/common@21 -- $ date +%s
00:16:46.023 14:41:37 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1727793697
00:16:46.023 14:41:37 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1727793697
00:16:46.023 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1727793697_collect-vmstat.pm.log
00:16:46.023 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1727793697_collect-cpu-load.pm.log
00:16:46.958 14:41:38 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:16:46.958 14:41:38 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:16:46.958 14:41:38 -- spdk/autopackage.sh@14 -- $ timing_finish
00:16:46.958 14:41:38 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:16:46.958 14:41:38 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:16:46.958 14:41:38 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:16:46.958 14:41:38 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:16:46.958 14:41:38 -- pm/common@29 -- $ signal_monitor_resources TERM
00:16:46.958 14:41:38 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:16:46.958 14:41:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:16:46.958 14:41:38 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:16:46.958 14:41:38 -- pm/common@44 -- $ pid=89917
00:16:46.958 14:41:38 -- pm/common@50 -- $ kill -TERM 89917
00:16:46.958 14:41:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:16:46.958 14:41:38 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:16:46.958 14:41:38 -- pm/common@44 -- $ pid=89918
00:16:46.958 14:41:38 -- pm/common@50 -- $ kill -TERM 89918
00:16:46.958 + [[ -n 4987 ]]
00:16:46.958 + sudo kill 4987
00:16:46.966 [Pipeline] }
00:16:46.983 [Pipeline] // timeout
00:16:46.987 [Pipeline] }
00:16:46.998 [Pipeline] // stage
00:16:47.004 [Pipeline] }
00:16:47.020 [Pipeline] // catchError
00:16:47.027 [Pipeline] stage
00:16:47.029 [Pipeline] { (Stop VM)
00:16:47.040 [Pipeline] sh
00:16:47.325 + vagrant halt
00:16:49.850 ==> default: Halting domain...
00:16:53.142 [Pipeline] sh
00:16:53.419 + vagrant destroy -f
00:16:55.946 ==> default: Removing domain...
00:16:55.957 [Pipeline] sh
00:16:56.234 + mv output /var/jenkins/workspace/raid-vg-autotest/output
00:16:56.244 [Pipeline] }
00:16:56.258 [Pipeline] // stage
00:16:56.263 [Pipeline] }
00:16:56.276 [Pipeline] // dir
00:16:56.281 [Pipeline] }
00:16:56.294 [Pipeline] // wrap
00:16:56.299 [Pipeline] }
00:16:56.310 [Pipeline] // catchError
00:16:56.319 [Pipeline] stage
00:16:56.321 [Pipeline] { (Epilogue)
00:16:56.333 [Pipeline] sh
00:16:56.612 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:17:01.947 [Pipeline] catchError
00:17:01.949 [Pipeline] {
00:17:01.961 [Pipeline] sh
00:17:02.239 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:17:02.239 Artifacts sizes are good
00:17:02.247 [Pipeline] }
00:17:02.261 [Pipeline] // catchError
00:17:02.271 [Pipeline] archiveArtifacts
00:17:02.277 Archiving artifacts
00:17:02.390 [Pipeline] cleanWs
00:17:02.404 [WS-CLEANUP] Deleting project workspace...
00:17:02.404 [WS-CLEANUP] Deferred wipeout is used...
00:17:02.410 [WS-CLEANUP] done
00:17:02.411 [Pipeline] }
00:17:02.425 [Pipeline] // stage
00:17:02.431 [Pipeline] }
00:17:02.444 [Pipeline] // node
00:17:02.449 [Pipeline] End of Pipeline
00:17:02.476 Finished: SUCCESS